
Automate post-disaster checks and foster offline communication – IBM Developer



Drones have become essential tools for first responders in search-and-rescue missions. In this code pattern, you learn how to use visual recognition to detect and tag S.O.S. messages from aerial images.


2017 was a year of record-breaking natural disasters, from Hurricanes Maria, Irma, and Harvey to the devastating forest fires in California. People all over the world suffer from tsunamis, tornadoes, floods, landslides, earthquakes, and volcanic eruptions – not to mention man-made disasters.

Aerial images have become crucial for search-and-rescue missions and disaster relief operations. However, not everyone has access to a helicopter or satellite, so drones have become an essential tool for capturing aerial photos quickly and cheaply.

This code pattern shows you how to complete the following tasks:

  • Use Cloud Annotations to train a visual recognition model that identifies universal aid symbols (like “S.O.S”) using object detection.
  • Stream and capture the video feed from a Tello drone.
  • Configure a web app to run prediction against the video feed and view a dashboard of the results.
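For the second task, the Tello exposes a plain-text UDP command interface documented in its SDK: you send "command" to enter SDK mode, then "streamon" to start the video feed. A minimal sketch of that handshake (assuming the drone's documented defaults of 192.168.10.1:8889 for commands; the local reply port 9000 is an arbitrary choice):

```python
import socket

TELLO_ADDR = ("192.168.10.1", 8889)  # drone's documented command address
VIDEO_PORT = 11111                   # drone streams H.264 video here after "streamon"

def make_command(cmd: str) -> bytes:
    """Encode a Tello SDK text command for the UDP socket."""
    return cmd.encode("utf-8")

def start_stream() -> None:
    """Enter SDK mode and ask the drone to start its video feed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 9000))            # local port for the drone's "ok"/"error" replies
    sock.settimeout(5.0)
    try:
        sock.sendto(make_command("command"), TELLO_ADDR)   # enter SDK mode
        print(sock.recv(1024).decode())                    # drone replies "ok" on success
        sock.sendto(make_command("streamon"), TELLO_ADDR)  # begin video on VIDEO_PORT
        print(sock.recv(1024).decode())
    finally:
        sock.close()

# Call start_stream() while connected to the drone's Wi-Fi network.
```

The code pattern itself drives the drone from the web app rather than a Python script; this sketch only illustrates the underlying protocol.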


Post-disaster visual recognition architecture flow diagram

  1. The user generates sample images using Lens Studio.
  2. The user uploads the images to Cloud Annotations, which trains a model and then exports a TensorFlow.js model.
  3. The user adds the TensorFlow.js model to the web application.
  4. The user connects the Tello drone to the computer and starts the web application.
  5. The drone video feed is captured by the web application.
  6. The video frames are analyzed by the TensorFlow.js model.
  7. The web app UI displays the visual recognition analysis.
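The per-frame analysis in steps 5–7 amounts to running the detector on each frame and keeping only confident hits for the dashboard. A hedged sketch of that filtering step (the `Detection` shape and the "sos" label are illustrative assumptions, not the code pattern's actual TensorFlow.js API):

```python
from typing import List, NamedTuple

class Detection(NamedTuple):
    label: str   # class predicted by the model, e.g. "sos"
    score: float # model confidence in [0, 1]
    box: tuple   # (x, y, width, height) in normalized coordinates

def filter_detections(detections: List[Detection],
                      wanted_label: str = "sos",
                      min_score: float = 0.5) -> List[Detection]:
    """Keep only confident detections of the symbol we care about."""
    return [d for d in detections
            if d.label == wanted_label and d.score >= min_score]

# Example frame result: one confident S.O.S. hit, one low-confidence false alarm
frame_result = [
    Detection("sos", 0.91, (0.4, 0.4, 0.2, 0.1)),
    Detection("sos", 0.22, (0.1, 0.8, 0.1, 0.1)),
]
hits = filter_detections(frame_result)  # only the 0.91 detection survives
```

Thresholding per frame keeps the dashboard from flickering on noisy low-confidence predictions.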


Ready to try this out? The detailed technical steps for this code pattern walk you through how to:

  1. Use augmented reality to generate the imageset.
  2. Train the model.
  3. Deploy the dashboard.
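Step 1 composites an S.O.S. symbol onto aerial backgrounds, and each placement must be recorded as a bounding-box annotation for training. A small sketch of that bookkeeping (the normalized x/y/width/height convention here is an assumption; check the format your annotation tool actually expects):

```python
import random

def place_symbol(img_w: int, img_h: int, sym_w: int, sym_h: int,
                 rng: random.Random) -> dict:
    """Pick a random top-left corner for the symbol inside the image and
    return a normalized bounding-box annotation for that placement."""
    x = rng.randint(0, img_w - sym_w)
    y = rng.randint(0, img_h - sym_h)
    return {
        "label": "sos",
        "x": x / img_w,          # normalized top-left corner
        "y": y / img_h,
        "width": sym_w / img_w,  # normalized size
        "height": sym_h / img_h,
    }

rng = random.Random(42)          # seeded for a reproducible dataset
box = place_symbol(1280, 720, 128, 72, rng)
# every coordinate stays inside [0, 1]
assert 0.0 <= box["x"] <= 1.0 and 0.0 <= box["y"] <= 1.0
```

Because the symbol is placed programmatically, the ground-truth box is known exactly, which is what makes synthetic imagesets attractive for object detection.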

Pedro Cruz
