
Today we released Drogue Cloud 0.4.0. This is a huge step forward in so many areas, so let's take a look.


Until now, we never really made a release. We created tagged versions, so that you don't need to rely on latest, which might change at any second, and we had a release pipeline in the CI. Still, it just didn't feel ready enough. This time it is different.


In previous posts we've seen how to run drogue-cloud, how to use LoRaWAN in Rust, and an introduction to drogue-device. In this post, we'll tie it all together and walk through running LoRa on drogue-device, sending data to drogue-cloud.


You might have noticed that we talk about Rust a lot, mostly in the context of embedded programming. However, we also have a few bits and pieces on the cloud side of things. If you have a device that gathers data, you might want to send that data somewhere for further processing and storage. Now, if we are using Rust to implement the embedded side, why not use it on the backend as well?
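As a rough illustration of what "Rust on the backend" can look like, here is a minimal sketch of an HTTP endpoint that accepts telemetry readings. It uses actix-web, and the route, payload shape, and field names are made up for this example, not part of any of our services.

```rust
use actix_web::{post, web, App, HttpServer, Responder};
use serde::Deserialize;

// Hypothetical telemetry payload, just for illustration.
#[derive(Deserialize)]
struct Telemetry {
    temperature: f32,
}

// Accept a JSON reading; a real service would forward it to processing/storage.
#[post("/telemetry")]
async fn telemetry(data: web::Json<Telemetry>) -> impl Responder {
    format!("stored reading: {} °C", data.temperature)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(telemetry))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```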


Trying to bring reusable and efficient components to embedded Rust has been a challenge for our team. We think we've started to make headway, and want to introduce the Drogue Device project.


Good news, everyone! Google Summer of Code 2021 is coming up. This gives you the chance to throw yourself at some horrifying tech problems, learn some new skills, and maybe get to know a few more memes. We are participating. How about you?


So far, when we talked about the serverless IoT cloud, we focused only on one-way communication: devices sending telemetry data to the cloud. But to have a complete IoT solution, we also need to be able to control our devices by sending commands back to them.
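To make the "commands back to the device" direction a bit more concrete, here is a minimal sketch of an application publishing a command through a cloud-side HTTP API. The endpoint URL and payload shape are assumptions for illustration only, not the actual Drogue Cloud API.

```rust
use reqwest::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    // Assumed endpoint and payload: the cloud side accepts the command and
    // delivers it to the device the next time it is reachable.
    let response = client
        .post("https://cloud.example.com/command/my-app/my-device")
        .json(&json!({ "command": "set-led", "value": true }))
        .send()
        .await?;
    println!("command accepted: {}", response.status());
    Ok(())
}
```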


LoRa is a low-power, long-range wireless protocol that operates in a lower frequency spectrum than WiFi, ZigBee, and Bluetooth. This enables IoT use cases not possible with the shorter-range technologies. And you can use Rust!


Pushing temperature readings in JSON structures to the cloud is fun, but it's even more fun to restart your pods by saying: "Hey Rodney, …". It also makes a nice demo, and a good test to see what fails when your Content-Type is audio/wav instead of application/json.


If we start living the async lifestyle, we can potentially get more use out of our limited hardware resources. Maybe not, but it's worth exploring. Let's explore.
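As a tiny taste of the async style, the sketch below runs two periodic "sensor" tasks cooperatively on a single thread: while one task awaits its timer, the executor runs the other. It uses tokio on the host for brevity; on a microcontroller, an embedded async executor would play the same role. The task names and intervals are made up for the example.

```rust
use std::time::Duration;
use tokio::time::sleep;

// A periodic "sensor" task: it spends most of its time awaiting a timer,
// which lets the executor run other tasks in the meantime.
async fn read_sensor(name: &str, interval_ms: u64) {
    for i in 0..3 {
        sleep(Duration::from_millis(interval_ms)).await;
        println!("{name}: reading {i}");
    }
}

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // Both tasks make progress concurrently on a single thread.
    tokio::join!(
        read_sensor("temperature", 500),
        read_sensor("humidity", 800),
    );
}
```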