Greengrass: Configuring an Intel Atom


    https://docs.aws.amazon.com/greengrass/latest/developerguide/ml-dlc-console.html#atom-lambda-dlc-config

    Configuring an Intel Atom

    To run this tutorial on an Intel Atom device, you provide source images and configure the Lambda function. To use the GPU for inference, you must have OpenCL version 1.0 or later installed on your device. You must also add a local device resource.
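    As a quick sanity check that OpenCL is available before you configure the function, you can list the OpenCL platforms and their reported versions. The following is a minimal sketch that assumes the third-party pyopencl package is installed on the device (it is not part of this tutorial):

      # Minimal OpenCL sanity check. Assumes the third-party pyopencl package
      # is installed on the device (for example, via pip); not part of the tutorial.
      import pyopencl as cl

      for platform in cl.get_platforms():
          # platform.version is a string such as "OpenCL 1.2 ..."
          print("Platform:", platform.name, "-", platform.version)
          for device in platform.get_devices():
              print("  Device:", device.name, "-", device.version)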

    1. Download static PNG or JPG images for the Lambda function to use for image classification. The example works best with small image files.

      Save your image files in the directory that contains the inference.py file (or in a subdirectory of that directory). These files are included in the Lambda function deployment package that you upload in Step 3: Create an Inference Lambda Function. (A sketch of how the function might locate these files follows the note below.)

      Note

      If you are using AWS DeepLens, you can choose to instead use the onboard camera or mount your own camera to capture images and perform inference on them. However, we strongly recommend you start with static images first.
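
      As a rough illustration of why the images sit next to inference.py, the following sketch shows one way a function could enumerate the bundled images. The images subdirectory name is only an example for this sketch; it is not defined by the tutorial:

      # Illustrative only: enumerate static images bundled in the deployment package.
      # The "images" subdirectory name is an example, not defined by the tutorial.
      import os

      PACKAGE_ROOT = os.path.dirname(os.path.abspath(__file__))
      IMAGE_DIR = os.path.join(PACKAGE_ROOT, "images")

      def list_source_images():
          """Return paths of .png/.jpg files shipped alongside inference.py."""
          return [
              os.path.join(IMAGE_DIR, name)
              for name in sorted(os.listdir(IMAGE_DIR))
              if name.lower().endswith((".png", ".jpg", ".jpeg"))
          ]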

    2. Edit the configuration of the Lambda function. Follow the procedure in Step 4: Add the Lambda Function to the Greengrass Group. (A scripted sketch of the same settings follows these sub-steps.)

      1. Increase the Memory limit value to 3000 MB.

      2. Increase the Timeout value to 2 minutes. This ensures that the request does not time out too early. It takes a few minutes after setup to run inference.

      3. For Read access to /sys directory, choose Enable.

      4. For Lambda lifecycle, choose Make this function long-lived and keep it running indefinitely.
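
      If you prefer to script these settings rather than use the console, the following sketch shows roughly how they map onto the AWS IoT Greengrass (V1) API through boto3. The definition name and function ARN are placeholders, and note that MemorySize is expressed in KB:

      # Rough scripted equivalent of the console settings above (Greengrass V1 API).
      # The definition name and function ARN are placeholders that you must replace.
      import boto3

      greengrass = boto3.client("greengrass")

      response = greengrass.create_function_definition(
          Name="optimizedImageClassification-config",  # placeholder name
          InitialVersion={
              "Functions": [
                  {
                      "Id": "optimizedImageClassification",
                      "FunctionArn": "arn:aws:lambda:region:account-id:function:function-name:alias-or-version",
                      "FunctionConfiguration": {
                          "MemorySize": 3000 * 1024,  # 3000 MB, expressed in KB
                          "Timeout": 120,             # 2 minutes, in seconds
                          "Pinned": True,             # long-lived function
                          "Environment": {
                              "AccessSysfs": True     # read access to /sys
                          },
                      },
                  }
              ]
          },
      )
      print(response["LatestVersionArn"])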

    3. Add the required local device resource. (A scripted sketch of this resource follows the steps below.)

      1. On the group configuration page, choose Resources.

        
        [Screenshot: The group configuration page with Resources highlighted.]
      2. On the Local tab, choose Add a local resource.

      3. Define the resource:

        • For Resource name, enter renderD128.

        • For Resource type, choose Device.

        • For Device path, enter /dev/dri/renderD128.

        • For Group owner file access permission, choose Automatically add OS group permissions of the Linux group that owns the resource.

        • For Lambda function affiliations, grant Read and write access to your Lambda function.
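
      The same device resource can also be created with the Greengrass (V1) API. The following sketch mirrors the console choices above; the definition name is a placeholder. Note that the read and write affiliation for your Lambda function is granted separately, through the function's resource access policies:

      # Rough scripted equivalent of adding the renderD128 local device resource
      # (Greengrass V1 API). The definition name is a placeholder.
      import boto3

      greengrass = boto3.client("greengrass")

      response = greengrass.create_resource_definition(
          Name="atom-gpu-resources",  # placeholder name
          InitialVersion={
              "Resources": [
                  {
                      "Id": "renderD128",
                      "Name": "renderD128",
                      "ResourceDataContainer": {
                          "LocalDeviceResourceData": {
                              "SourcePath": "/dev/dri/renderD128",
                              "GroupOwnerSetting": {
                                  # Automatically add the OS group that owns the device
                                  "AutoAddGroupOwner": True
                              },
                          }
                      },
                  }
              ]
          },
      )
      print(response["LatestVersionArn"])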

    Configuring an NVIDIA Jetson TX2

    To run this tutorial on an NVIDIA Jetson TX2, you provide source images and configure the Lambda function. To use the GPU for inference, you must install CUDA 9.0 and cuDNN 7.0 on your device when you image your board with JetPack 3.3. You must also add local device resources.

    To learn how to configure your Jetson so you can install the AWS IoT Greengrass Core software, see Setting Up Other Devices.
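
    As a quick way to confirm that the CUDA toolkit is visible on the board before you continue, the following sketch queries nvcc from Python. It assumes nvcc is on the PATH, which depends on how JetPack set up your environment:

      # Quick check that the CUDA toolkit is visible on the Jetson board.
      # Assumes nvcc is on the PATH (this depends on your JetPack setup).
      import subprocess

      try:
          output = subprocess.check_output(["nvcc", "--version"], text=True)
          print(output)
      except (OSError, subprocess.CalledProcessError) as err:
          print("Could not run nvcc; check your CUDA installation:", err)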

    1. Download static PNG or JPG images for the Lambda function to use for image classification. The example works best with small image files.

      Save your image files in the directory that contains the inference.py file (or in a subdirectory of that directory). These files are included in the Lambda function deployment package that you upload in Step 3: Create an Inference Lambda Function.

      Note

      You can instead choose to instrument a camera on the Jetson board to capture the source images. However, we strongly recommend you start with static images first.

    2. Edit the configuration of the Lambda function. Follow the procedure in Step 4: Add the Lambda Function to the Greengrass Group.

      1. Increase the Memory limit value. To use the provided model in GPU mode, use 2048 MB.

      2. Increase the Timeout value to 5 minutes. This ensures that the request does not time out too early. It takes a few minutes after setup to run inference.

      3. For Lambda lifecycle, choose Make this function long-lived and keep it running indefinitely.

      4. For Read access to /sys directory, choose Enable.

    3. Add the required local device resources. (A sketch for checking that these device nodes exist follows the table below.)

      1. On the group configuration page, choose Resources.

        
        [Screenshot: The group configuration page with Resources highlighted.]
      2. On the Local tab, choose Add a local resource.

      3. Define each resource:

        • For Resource name and Device path, use the values in the following table. Create one device resource for each row in the table.

        • For Resource type, choose Device.

        • For Group owner file access permission, choose Automatically add OS group permissions of the Linux group that owns the resource.

        • For Lambda function affiliations, grant Read and write access to your Lambda function.

          Name               Device path

          nvhost-ctrl        /dev/nvhost-ctrl
          nvhost-gpu         /dev/nvhost-gpu
          nvhost-ctrl-gpu    /dev/nvhost-ctrl-gpu
          nvhost-dbg-gpu     /dev/nvhost-dbg-gpu
          nvhost-prof-gpu    /dev/nvhost-prof-gpu
          nvmap              /dev/nvmap
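
      Before you deploy, it can help to confirm that each device node in the table actually exists on the board. A minimal check:

      # Confirm that each Jetson device node listed above exists on the board.
      import os

      DEVICE_PATHS = [
          "/dev/nvhost-ctrl",
          "/dev/nvhost-gpu",
          "/dev/nvhost-ctrl-gpu",
          "/dev/nvhost-dbg-gpu",
          "/dev/nvhost-prof-gpu",
          "/dev/nvmap",
      ]

      for path in DEVICE_PATHS:
          status = "found" if os.path.exists(path) else "MISSING"
          print(f"{path}: {status}")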

    Troubleshooting AWS IoT Greengrass ML Inference

    If the test is not successful, you can try the following troubleshooting steps. Run the commands in a terminal on your AWS IoT Greengrass core device.

    Check error logs

    1. Switch to the root user and navigate to the log directory. Access to AWS IoT Greengrass logs requires root permissions.

       
      sudo su
      cd /greengrass/ggc/var/log
    2. Check runtime.log for any errors.

       
      cat system/runtime.log | grep 'ERROR'

      You can also look in your user-defined Lambda function log for any errors:

       
      cat user/your-region/your-account-id/lambda-function-name.log | grep 'ERROR'

      For more information, see Troubleshooting with Logs.

    Verify the Lambda function is successfully deployed

    1. List the contents of the deployed Lambda function in the /lambda directory. Replace the placeholder values before you run the command.

       
      cd /greengrass/ggc/deployment/lambda/arn:aws:lambda:region:account:function:function-name:function-version
      ls -la
    2. Verify that the directory contains the same content as the optimizedImageClassification.zip deployment package that you uploaded in Step 3: Create an Inference Lambda Function.

      Make sure that the .py files and dependencies are in the root of the directory. (A sketch that compares the package with the deployed directory follows this step.)
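
      If you still have the optimizedImageClassification.zip package on hand, the following sketch compares its top-level entries against the deployed directory. Both paths are placeholders that you must adjust:

      # Compare the top-level entries of the deployment package with the deployed
      # Lambda directory. Both paths are placeholders that you must adjust.
      import os
      import zipfile

      ZIP_PATH = "/path/to/optimizedImageClassification.zip"
      DEPLOYED_DIR = "/greengrass/ggc/deployment/lambda/your-function-arn"

      with zipfile.ZipFile(ZIP_PATH) as package:
          zip_entries = {name.split("/")[0] for name in package.namelist()}

      deployed_entries = set(os.listdir(DEPLOYED_DIR))

      print("In package but not deployed:", sorted(zip_entries - deployed_entries))
      print("Deployed but not in package:", sorted(deployed_entries - zip_entries))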

    Verify the inference model is successfully deployed

    1. Find the process identification number (PID) of the Lambda runtime process:

       
      ps aux | grep lambda-function-name

      In the output, the PID appears in the second column of the line for the Lambda runtime process.

    2. Enter the Lambda runtime namespace. Be sure to replace the placeholder pid value before you run the command.

      Note

      This directory and its contents are in the Lambda runtime namespace, so they aren't visible in a regular Linux namespace.

       
      sudo nsenter -t pid -m /bin/bash
    3. List the contents of the local directory that you specified for the ML resource.

      Note

      If you specified a local destination path other than ml_model for the ML resource, substitute that path here.

       
      cd /ml_model
      ls -ls

      You should see the following files:

       
          56 -rw-r--r-- 1 ggc_user ggc_group     56703 Oct 29 20:07 model.json
      196152 -rw-r--r-- 1 ggc_user ggc_group 200855043 Oct 29 20:08 model.params
         256 -rw-r--r-- 1 ggc_user ggc_group    261848 Oct 29 20:07 model.so
          32 -rw-r--r-- 1 ggc_user ggc_group     30564 Oct 29 20:08 synset.txt

    Lambda function cannot find /dev/dri/renderD128

    This can occur if OpenCL cannot connect to the GPU devices it needs. You must create device resources for the necessary devices for your Lambda function.
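
    A quick way to see which Linux group owns the device node (the group that the Automatically add OS group permissions option relies on) is the following sketch:

      # Print the owning user and group of the GPU device node, which the
      # "Automatically add OS group permissions" option relies on.
      import grp
      import os
      import pwd
      import stat

      DEVICE = "/dev/dri/renderD128"

      try:
          info = os.stat(DEVICE)
          owner = pwd.getpwuid(info.st_uid).pw_name
          group = grp.getgrgid(info.st_gid).gr_name
          print(DEVICE, "owner:", owner, "group:", group,
                "mode:", stat.filemode(info.st_mode))
      except FileNotFoundError:
          print(DEVICE, "does not exist on this device")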

    Next Steps

    Next, explore other optimized models. For information, see the Amazon SageMaker Neo documentation.
