Inference Example (Python)

This chapter covers two topics: (1) running the Python example program; (2) notes on developing a Python inference program.

Running the Python Example Program

1. Install the Python inference library

Please follow the official homepage - Quick Install page to install or build the library yourself. Paddle Inference development environments can currently be set up via pip/conda installation, Docker images, or building from source.

2. Prepare the inference model

Download and extract the ResNet50 model to obtain a model in Paddle inference format, located in the resnet50 directory. To inspect the model structure, rename the inference.pdmodel file to __model__ and open it with the model visualization tool Netron.

wget https://paddle-inference-dist.bj.bcebos.com/Paddle-Inference-Demo/resnet50.tgz
tar zxf resnet50.tgz

# The resulting model directory and files are as follows
resnet50/
├── inference.pdmodel
├── inference.pdiparams.info
└── inference.pdiparams

3. Prepare the inference program

Save the following code as python_demo.py:

import argparse
import numpy as np

# Import the paddle inference library
import paddle.inference as paddle_infer

def main():
    args = parse_args()

    # Create the config
    config = paddle_infer.Config(args.model_file, args.params_file)

    # Create the predictor from the config
    predictor = paddle_infer.create_predictor(config)

    # Get the input names
    input_names = predictor.get_input_names()
    input_handle = predictor.get_input_handle(input_names[0])

    # Set the input
    fake_input = np.random.randn(args.batch_size, 3, 318, 318).astype("float32")
    input_handle.reshape([args.batch_size, 3, 318, 318])
    input_handle.copy_from_cpu(fake_input)

    # Run the predictor
    predictor.run()

    # Get the output
    output_names = predictor.get_output_names()
    output_handle = predictor.get_output_handle(output_names[0])
    output_data = output_handle.copy_to_cpu() # numpy.ndarray
    print("Output data size is {}".format(output_data.size))
    print("Output data shape is {}".format(output_data.shape))

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_file", type=str, help="model filename")
    parser.add_argument("--params_file", type=str, help="parameter filename")
    parser.add_argument("--batch_size", type=int, default=1, help="batch size")
    return parser.parse_args()

if __name__ == "__main__":
    main()

4. Run the inference program

# Pass the ResNet50 model downloaded in step 2 of this chapter as arguments
python python_demo.py --model_file ./resnet50/inference.pdmodel --params_file ./resnet50/inference.pdiparams --batch_size 2

After a successful run, the inference output is as follows:

# Program output
I1211 11:12:40.869632 20942 analysis_predictor.cc:139] Profiler is deactivated, and no profiling report will be generated.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [fc_fuse_pass]
I1211 11:12:41.327713 20942 graph_pattern_detector.cc:100] ---  detected 1 subgraphs
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
I1211 11:12:41.550542 20942 graph_pattern_detector.cc:100] ---  detected 53 subgraphs
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I1211 11:12:41.594254 20942 analysis_predictor.cc:537] ======= optimize end =======
I1211 11:12:41.594414 20942 naive_executor.cc:102] ---  skip [feed], feed -> data
I1211 11:12:41.595824 20942 naive_executor.cc:102] ---  skip [AddmmBackward190.fc.output.1.tmp_1], fetch -> fetch
Output data size is 1024
Output data shape is (2, 512)

Notes on Developing a Python Inference Program

Developing a Python inference program with Paddle Inference takes only the following six steps:

(1) Import the paddle inference library

import paddle.inference as paddle_infer

(2) Create a configuration object and set it up as needed; see the Python API docs - Config for details

# Create the config and set the model path
config = paddle_infer.Config(args.model_file, args.params_file)

(3) Create a predictor from the Config; see the Python API docs - Predictor for details

predictor = paddle_infer.create_predictor(config)

(4) Set the model's input Tensor; see the Python API docs - Tensor for details

# Get the input names
input_names = predictor.get_input_names()
input_handle = predictor.get_input_handle(input_names[0])

# Set the input
fake_input = np.random.randn(args.batch_size, 3, 318, 318).astype("float32")
input_handle.reshape([args.batch_size, 3, 318, 318])
input_handle.copy_from_cpu(fake_input)
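
In a real deployment, the random tensor above would be replaced by a preprocessed image. A minimal NumPy-only sketch of typical ImageNet-style preprocessing (the mean/std constants are a common convention from ImageNet training pipelines, not something mandated by Paddle Inference; match whatever your model was trained with):

```python
import numpy as np

# Common ImageNet normalization constants (an assumption; use your model's own values)
MEAN = np.array([0.485, 0.456, 0.406], dtype="float32").reshape(3, 1, 1)
STD = np.array([0.229, 0.224, 0.225], dtype="float32").reshape(3, 1, 1)

def preprocess(img_hwc_uint8):
    """Convert an HxWx3 uint8 image into a 1x3xHxW float32 tensor."""
    chw = img_hwc_uint8.astype("float32").transpose(2, 0, 1) / 255.0  # HWC -> CHW, scale to [0, 1]
    chw = (chw - MEAN) / STD                                          # channel-wise normalization
    return chw[np.newaxis, ...]                                       # add the batch dimension

# Demo on a synthetic 318x318 image
fake_img = np.random.randint(0, 256, (318, 318, 3), dtype="uint8")
tensor = preprocess(fake_img)
print(tensor.shape, tensor.dtype)  # (1, 3, 318, 318) float32
```

The resulting array can be passed to copy_from_cpu exactly like fake_input above.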

(5) Run inference; see the Python API docs - Predictor for details

predictor.run()

(6) Get the inference results; see the Python API docs - Tensor for details

output_names = predictor.get_output_names()
output_handle = predictor.get_output_handle(output_names[0])
output_data = output_handle.copy_to_cpu() # numpy.ndarray
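
The demo only prints the output's size and shape. If a model ends in a classification head (say, 1000 ImageNet classes; the ResNet50 above actually emits a (batch, 512) feature vector), the raw scores returned by copy_to_cpu can be turned into class probabilities with a softmax plus top-k lookup. A minimal NumPy-only sketch:

```python
import numpy as np

def top_k(logits, k=5):
    """Return (indices, probabilities) of the k highest-scoring classes per row."""
    # Numerically stable softmax over the last axis
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    idx = np.argsort(-probs, axis=-1)[:, :k]                # top-k class indices, best first
    return idx, np.take_along_axis(probs, idx, axis=-1)

# Demo with fake logits for a batch of 2 over 1000 classes
logits = np.random.randn(2, 1000).astype("float32")
indices, scores = top_k(logits)
print(indices.shape, scores.shape)  # (2, 5) (2, 5)
```

In practice, the returned indices would be mapped to human-readable labels via the label file that accompanies the model.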