IoT
Exploring AWS IoT Services - Part 3
In the final installment of our AWS IoT journey, this post explores how to set up an AWS IoT Greengrass development environment on Amazon EC2 and connect local client devices to it.
By Carlos Lemus and Nick Locascio
One of the needs we have identified at Trek10 when developing systems that use AWS IoT Greengrass is setting up a development environment to test our deployments and components. We typically do this by installing the Greengrass Client Software on an Amazon EC2 instance. While the Greengrass software is primarily designed to be run on a device at the edge, having a setup running in the cloud allows us to centralize the Greengrass development environment while leveraging all the network and security options made available by AWS.
A Greengrass development environment on EC2 allows our team, distributed across the US, to easily develop components, test deployments, and debug applications through easy access to the Greengrass device logs. Nevertheless, we have encountered some challenging situations along the way when installing and running the Greengrass Client Software on EC2, so we decided to share our findings and insights with the community through this post.
Several of our projects use Greengrass as a message broker between client devices and AWS IoT Core, so we will use this example to guide our conversation about Greengrass on EC2. Nevertheless, these lessons can be applied to any other Greengrass application such as container hosting, AWS Lambda functions, machine learning inference, and so on.
There are three areas we believe to be fairly involved when setting up an AWS IoT Greengrass development environment, and we will cover them in this post:
Setting up the Greengrass software on an EC2 instance
Enabling local client device communications
Testing end-to-end communications
Let’s get started.
Before we do any type of activity with AWS IoT Greengrass, we must ensure we have a service role attached to the Greengrass service. The Greengrass service role is an IAM role that allows Greengrass, among other things, to perform all the operations needed for device verification and management. Greengrass role setup instructions as well as more information about the role are described in the AWS Documentation.
Once we’ve ensured the Greengrass Service Role is set up, we can move on to launching the EC2 instance where the Greengrass Client Software will be installed.
Setting up an EC2 instance is generally a straightforward process in AWS. We can launch an EC2 instance from the console and make sure to allow inbound traffic on port 8883 either from trusted private networks or from the open Internet depending on the security requirements.
While we typically prefer to use AWS Systems Manager to access EC2 instances instead of relying on SSH keys, doing so in this case requires quite a bit of scaffolding that we have already covered in a different post. To keep this one focused, we assume a simple setup with SSH keys as the EC2 instance connectivity approach, which means we’ll also need to allow inbound traffic on port 22.
You can learn how to launch an EC2 instance in this article and how to configure security group access rules in this article, both in the AWS Documentation. But at a high level, the steps required to quickly launch our instance are as follows:
Navigate to the EC2 console and launch a new instance
Choose an instance type (for development purposes, we typically stick with a t2.micro, though this is entirely application/workload specific)
Create a new key pair and download it
Create a security group that allows inbound traffic on port 8883 from anywhere as well as inbound traffic on port 22 from a trusted IP address
You can also use the AWS Command Line Interface (CLI) to create this instance as explained in this page of the AWS Documentation.
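As a rough sketch, the CLI launch might look like the following; the AMI ID, key pair name, and security group ID below are placeholders you would substitute with your own values:

```shell
# Launch a t2.micro development instance (all IDs/names below are
# placeholder values; substitute your own AMI, key pair, and security group)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-greengrass-key \
    --security-group-ids sg-0123456789abcdef0 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=greengrass-dev}]' \
    --query 'Instances[0].InstanceId' \
    --output text
```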
Take note of the public IP address assigned to your EC2 instance as we will use it later on.
Once the instance is up and running, the next task is to set up the Greengrass Client software on our instance. You will need to access your EC2 instance using your SSH keypair, and then follow the steps outlined here to set up the software.
The easiest way to verify a successful deployment is to navigate to the AWS IoT Core service page in the AWS Console and find the list of Greengrass Core Devices (Manage > Greengrass Devices > Core Devices) to verify the device is listed as active.
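The same check can also be done from the CLI by listing the registered core devices and their health status:

```shell
# List Greengrass core devices and their reported health status
aws greengrassv2 list-core-devices \
    --query 'coreDevices[].{name:coreDeviceThingName,status:status}' \
    --output table
```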
That’s all that’s needed to launch the appropriate EC2 instance and set it up with the Greengrass Software! Next up, we will see how to get local client devices connected to the Greengrass deployment and communicating with AWS.
From this point on, all our examples will be done in the AWS CLI to promote language-agnostic automation.
Once we’ve successfully installed the Greengrass Software on an EC2 instance, we can begin configuring the Greengrass deployment to allow local client devices to communicate with the cloud. To do this, we need to update the existing Greengrass deployment with five components at minimum:
Greengrass nucleus
MQTT 3.1.1 broker (Moquette)
IP detector
Client device auth
MQTT bridge
We can create such a deployment using the create-deployment command:
aws greengrassv2 create-deployment \
    --cli-input-json file://cli-deployment.json
where cli-deployment.json is:
{
  "targetArn": "<target_arn>",
  "deploymentName": "Deployment for MyGreengrassCore",
  "components": {
    "aws.greengrass.Nucleus": {
      "componentVersion": "2.6.0"
    },
    "aws.greengrass.clientdevices.IPDetector": {
      "componentVersion": "2.1.2"
    },
    "aws.greengrass.clientdevices.Auth": {
      "componentVersion": "2.2.0",
      "configurationUpdate": {"merge": <auth_config>}
    },
    "aws.greengrass.clientdevices.mqtt.Moquette": {
      "componentVersion": "2.2.0"
    },
    "aws.greengrass.clientdevices.mqtt.Bridge": {
      "componentVersion": "2.2.0",
      "configurationUpdate": {"merge": <bridge_config>}
    }
  }
}
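Because the placeholders make it easy to end up with malformed JSON (for example, stray trailing commas), we find it useful to validate the file locally before calling create-deployment. A minimal sketch, using a filled-in example file and Python's built-in json.tool (both the file name and its values are for illustration only):

```shell
# Write a filled-in example deployment file (the ARN and component list
# are illustrative placeholders, not real values)
cat > cli-deployment-example.json <<'EOF'
{
  "targetArn": "arn:aws:iot:us-east-1:123456789012:thing/MyGreengrassCore",
  "deploymentName": "Deployment for MyGreengrassCore",
  "components": {
    "aws.greengrass.Nucleus": {"componentVersion": "2.6.0"},
    "aws.greengrass.clientdevices.IPDetector": {"componentVersion": "2.1.2"},
    "aws.greengrass.clientdevices.Auth": {"componentVersion": "2.2.0"},
    "aws.greengrass.clientdevices.mqtt.Moquette": {"componentVersion": "2.2.0"},
    "aws.greengrass.clientdevices.mqtt.Bridge": {"componentVersion": "2.2.0"}
  }
}
EOF

# json.tool exits non-zero on invalid JSON, catching trailing commas and
# unsubstituted placeholders before the deployment is submitted
python3 -m json.tool cli-deployment-example.json > /dev/null && echo "deployment file is valid JSON"
```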
The target_arn will be the ARN of the IoT Thing previously created by the Greengrass Software installer (you can find it in the IoT Core console page under Manage > All Devices > Things).
The required configurations for the Client Device Auth and MQTT bridge components are explained next.
According to the AWS Documentation, the Client Device Auth component “authenticates client devices and authorizes client device actions.” In other words, it allows us to specify which client devices can connect and exchange data with the Greengrass MQTT bridge.
The following sample policy allows all devices to connect, publish, and subscribe to the Greengrass MQTT bridge. Note that we don’t recommend such an open policy for your business-critical workloads and you should tailor your policy to your use case by following the documentation linked above!
{
"deviceGroups": {
"formatVersion": "2021-03-05",
"definitions": {
"MyDeviceGroup": {
"selectionRule": "thingName: *",
"policyName": "MyClientDevicePolicy"
}
},
"policies": {
"MyClientDevicePolicy": {
"AllowConnect": {
"statementDescription": "Allow client devices to connect.",
"operations": [
"mqtt:connect"
],
"resources": [
"*"
]
},
"AllowPublish": {
"statementDescription": "Allow client devices to publish to all topics.",
"operations": [
"mqtt:publish"
],
"resources": [
"*"
]
},
"AllowSubscribe": {
"statementDescription": "Allow client devices to subscribe to all topics.",
"operations": [
"mqtt:subscribe"
],
"resources": [
"*"
]
}
}
}
}
}
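As one illustration of tailoring the policy, a tighter device group might restrict connections to things whose names share a prefix and limit publishing to the single topic used later in this post. The thing name prefix and policy name below are hypothetical, and the resource formats shown (a plain "*" for connect, "mqtt:topic:..." for publish) follow our reading of the component documentation:

{
  "deviceGroups": {
    "formatVersion": "2021-03-05",
    "definitions": {
      "MyDeviceGroup": {
        "selectionRule": "thingName: MyClientDevice*",
        "policyName": "MyRestrictedPolicy"
      }
    },
    "policies": {
      "MyRestrictedPolicy": {
        "AllowConnect": {
          "statementDescription": "Allow matching client devices to connect.",
          "operations": ["mqtt:connect"],
          "resources": ["*"]
        },
        "AllowPublish": {
          "statementDescription": "Allow publishing only to the bridge topic.",
          "operations": ["mqtt:publish"],
          "resources": ["mqtt:topic:test-gg"]
        }
      }
    }
  }
}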
The MQTT bridge component requires an mqttTopicMapping in order to route messages from client devices to IoT Core. This is a simple but crucial configuration step:
{
"mqttTopicMapping": {
"IotTopicMapping": {
"topic": "test-gg",
"source": "LocalMqtt",
"target": "IotCore"
}
}
}
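Note that each mapping is one-directional (source to target), so routing messages from the cloud back down to local devices requires a second entry. For example, assuming a hypothetical test-gg-commands topic for cloud-to-device traffic:

{
  "mqttTopicMapping": {
    "IotTopicMapping": {
      "topic": "test-gg",
      "source": "LocalMqtt",
      "target": "IotCore"
    },
    "CloudToLocalMapping": {
      "topic": "test-gg-commands",
      "source": "IotCore",
      "target": "LocalMqtt"
    }
  }
}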
Now that we know what the configuration for both components should look like, we can place each configuration into the appropriate field in cli-deployment.json and revisit our deployment command:
aws greengrassv2 create-deployment \
    --cli-input-json file://cli-deployment.json
We will get a deploymentId in the response, which we can then use to check on the status of the deployment:
aws greengrassv2 get-deployment \
    --deployment-id <deploymentId>
When the deployment is complete, we will get “COMPLETED” as the deploymentStatus. For example:
{
"targetArn": "<target_arn>",
"revisionId": "1",
"deploymentId": "28ffe314-66bb-4ab2-a11d-aab35b18eebd",
"deploymentName": "test-deployment",
"deploymentStatus": "COMPLETED",
…
}
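Rather than re-running get-deployment by hand, the status check can be wrapped in a small polling loop. A sketch (the deployment ID is the example value from above; substitute your own):

```shell
# Poll the deployment status every 10 seconds until it leaves ACTIVE
# (terminal statuses include COMPLETED, FAILED, and CANCELED)
DEPLOYMENT_ID="28ffe314-66bb-4ab2-a11d-aab35b18eebd"
while true; do
  STATUS=$(aws greengrassv2 get-deployment \
      --deployment-id "$DEPLOYMENT_ID" \
      --query deploymentStatus --output text)
  echo "deploymentStatus: $STATUS"
  [ "$STATUS" != "ACTIVE" ] && break
  sleep 10
done
```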
Once the deployment finishes, we must update the connectivity information of our Greengrass core device so that clients can learn the Greengrass core's IP address and port number (this information is served to clients through the Greengrass Discovery API). We can achieve this through a simple CLI call, but we must be sure to pass in the thing_name assigned to the Greengrass core device and the public IP address of our Greengrass EC2 instance.
aws greengrassv2 update-connectivity-info --thing-name <thing_name> --cli-input-json file://core-device-connectivity-info.json
where core-device-connectivity-info.json is:
{
  "connectivityInfo": [
    {
      "hostAddress": "<public_ip>",
      "portNumber": 8883
    }
  ]
}
Lastly, we test our entire deployment by running our Trek10 Device Simulator tool (a specialized tool we use at Trek10 to deliver and test end-to-end IoT systems for clients) to publish data to the Greengrass Core MQTT endpoint, though you could replicate this by using an open source library such as the AWS IoT Greengrass PubSub SDK for Python.
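If you don't have a simulator handy, any MQTT 3.1.1 client can stand in. As a sketch using the mosquitto_pub utility, assuming you have already provisioned a client device certificate that the client device auth component accepts, and retrieved the core device CA certificate via the discovery API (all file paths below are hypothetical):

```shell
# Publish a test message to the Greengrass core's Moquette broker over mutual TLS
# (core-device-ca.pem, device.pem.crt, and private.pem.key are placeholder paths)
mosquitto_pub \
    -h <public_ip> -p 8883 \
    --cafile ./core-device-ca.pem \
    --cert ./device.pem.crt \
    --key ./private.pem.key \
    -t test-gg \
    -m '{"hello": "from a client device"}' \
    -i MyClientDevice1
```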
Here is a sample screenshot of the data being published to the core:
We can also check the logs on the Greengrass instance by referencing the /greengrass/v2/logs/greengrass.log file to verify that messages are being routed through the core device to AWS:
Finally, we can navigate through the console to the AWS IoT Core’s MQTT test client to ensure that the messages make it to AWS:
Using Amazon EC2 instances for Greengrass development and testing has helped us immensely by allowing our distributed team to easily access and develop on a remote Greengrass deployment. Leveraging EC2's versatility to host the Greengrass client software allows users to develop components and test deployments without the need for a physical device.