IoT


Australian Start-Up Ynomia Is Building an IoT Platform to Transform the Construction Industry and its Hostile Environments

The trillion-dollar construction industry has not yet experienced the technology revolution seen in other sectors. Low levels of R&D investment and difficult working environments have stifled innovation, and fundamental improvements have been slow to arrive. But one Australian start-up is changing that by building an Internet of Things (IoT) platform to harness construction and jobsite data in real time.

“Productivity in construction is down there with hunting and fishing as one of the least productive industries per capita in the entire world. It's a space that's ripe for people to come in and really help,” explains Rob Postill, CTO at Ynomia.

Ynomia has already been closely involved with many prestigious construction projects, including the residential N06 development in London's famous 2012 Olympic Village. It was also integral to the construction of the Victoria University Tower in Australia.

“These projects involve massive outflows of money: think about glass facades on modern buildings, which can represent 20-30 percent of the overall project cost. They are largely produced in China and can take 12 weeks to get here,” says Postill. “Meanwhile, the plasterer, the plumber, and the electrician are all waiting for those glass facades to be put on so it is safe for them to work. If you get it wrong, you can go into the deep red very quickly.”

To tackle these longstanding challenges, Ynomia aims to address the lack of connectivity, transparency, and data management on construction sites, which has traditionally resulted in the inefficient use of critical personnel, equipment, and materials; compressed timelines; and unpredictable cash flows. To optimize productivity, Ynomia offers a simple end-to-end technology solution that creates a Connected Jobsite, helping teams manage materials, tools, and people across the worksite in real time.

IoT in a Hostile Environment

The deployment of technology in construction is often fraught with risk.
As a result, construction sites are still largely run on paper: blueprints, diagrams, and models, as well as the more traditional invoices and filing. At the same time, there is a constant need to track progress and monitor massive volumes of information across the entire supply chain. Engineers, builders, electricians, plumbers, and all the other associated professionals need to know what they need to do, where they need to be, and when they need to start.

“The environment is hostile to technology like GPS, computers, and mobile phone reception because you have a lot of Faraday cages and lots of water and dust,” explains Postill. “You can't have somebody wandering around a construction site with a laptop; it'll get trashed pretty quickly.”

Enter MongoDB Atlas

“On a site, you might be talking about materials, then if you add to that plant and equipment, or bins, or tools, you're rapidly getting into thousands and thousands of tags, talking all the time, every day,” said Postill.

That means thousands of tags now send millions of readings on Ynomia building sites around the world. All these IoT data packets must be stored efficiently and accurately so Ynomia can reassemble the history of what has happened and track tagged inventory, personnel, and vehicles around the site. Many of the tag events are also safety critical, so accuracy is vital and packets can't be missed.

To address these needs, Ynomia was looking for a database that was scalable, flexible, and resilient, and that could easily handle a wide variety of fast-changing sensor data captured from multiple devices. The final requirement was freedom: a database that didn't lock the company into a single cloud platform, as Ynomia was still in the early stages of assessing cloud partners.
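To make that stream of data concrete, each tag reading can be modeled as a small self-describing document before it is written to the database. The sketch below is purely illustrative; the field names are assumptions, not Ynomia's actual schema:

```javascript
// Hypothetical shape of a single tag reading as it might be stored in MongoDB.
// Field names are illustrative, not Ynomia's real schema.
function makeTagEvent(tagId, zone, rssi) {
  return {
    tagId: tagId,           // ID of the tag attached to a pallet, tool, or badge
    zone: zone,             // site zone of the gateway that heard the tag
    rssi: rssi,             // signal strength, used to estimate proximity
    receivedAt: new Date(), // server-side timestamp for reassembling history
  };
}

const event = makeTagEvent("TAG-0042", "level-3-east", -67);
console.log(event.tagId, event.zone);
```

Because each event carries its own timestamp and location context, the full history of an item's movements can be reconstructed by querying on `tagId` and sorting by `receivedAt`.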
The Commonwealth Scientific and Industrial Research Organisation, which Postill had worked with in the past, suggested MongoDB, a general-purpose, document-based database built for modern applications. “The most important factor was that the database is event-driven, which I knew would be difficult in the traditional relational model. We deal with millions of tag readings a day, which is a massive wall of data,” said Postill.

A Cloud Database

Ynomia is using MongoDB Atlas, the global cloud database service, now hosted on Microsoft Azure. Atlas offers best-in-class automation and proven practices that combine availability, scalability, and compliance with the most demanding data security and privacy standards.

“When we started we didn't know enough about the problem and we didn't want to be constrained,” explained Postill. “MongoDB Atlas gives us a cloud environment that moves with us. It allows us to understand what is happening and make changes to the architecture as we go.”

Postill says this combination of flexibility and management tooling also allows his developers to focus on business value rather than undifferentiated code. One example is cluster administration: “Cluster administration for a start-up like us is wasted work,” he said. “We're not solving the customer's problem. We're not moving anything on. We're focusing on the wrong thing. For us to be able to just make that problem go away is huge. Why wouldn't you?”

Atlas also gives Ynomia the option to spin up new clusters seamlessly anywhere in the world. This allows customers to keep data local to their construction site, improving latency and helping meet regional data regulations.

Real-Time Analytics

The company has also deployed MongoDB Charts, which takes this live data and automatically provides a real-time view.
Charts is the fastest and easiest way to visualize event data directly from MongoDB, making it possible to act instantly and decisively on the real-time insights generated by an event-driven architecture. It allows Ynomia to share dashboards so all the right people can see what they need to and collaborate accordingly.

“Charts enables us to quickly visualize information without having to build more expensive tools, both internally and externally, to examine our data,” comments Postill. “As a startup, we go through this journey of: what are we doing and how are we doing it? There's a lot of stuff we are finding out along the way on how we slice and re-slice our data using Charts.”

A Platform for Future Growth

Ynomia is targeting a huge market and is set for ambitious growth in the coming years. How the platform, and its underlying architecture, can continue to scale and evolve will be crucial to enabling that business growth.

“We do anything we can to keep things simple,” concluded Postill. “We pick technology partners that save us from spending time we shouldn't spend so we can solve real problems. We pick technologies that roll with the punches, and that's MongoDB.”

February 23, 2021

Capture IOT Data With MongoDB Stitch in 5 Minutes

Capturing IoT data is a complex task for two main reasons: we have to deal with a huge amount of data, so we need a rock-solid architecture, while maintaining a bulletproof level of security.

First, let's have a look at a standard IoT capture architecture.

On the left, we have our sensors. Let's assume they can push data every second over TCP using an HTTP POST, and let's suppose we have a million of them. We need an architecture capable of handling a million queries per second and able to resist any kind of network or hardware failure. TCP queries need to be distributed evenly to the application servers using load balancers, and finally the application servers push the data to the multiple mongos routers of our MongoDB Sharded Cluster.

As you can see, this architecture is relatively complex to install. We need to buy and maintain a lot of servers, apply regular security updates to the operating systems and applications, and build an auto-scaling capability to reduce maintenance cost and enable automatic failover. This kind of architecture is expensive, and the maintenance cost can be quite high as well.

Now let's solve this same problem with MongoDB Stitch! Once you have created a MongoDB Atlas cluster, you can attach a MongoDB Stitch application to it and then create an HTTP Service containing the following code:

```javascript
exports = function(payload, response) {
  // Grab the Atlas service and the target collection
  const mongodb = context.services.get("mongodb-atlas");
  const sensors = mongodb.db("stitch").collection("sensors");

  // Parse the incoming JSON body and timestamp it server-side
  var body = EJSON.parse(payload.body.text());
  body.createdAt = new Date();

  // Persist the reading and acknowledge with 201 Created
  sensors.insertOne(body)
    .then(result => {
      response.setStatusCode(201);
    });
};
```

And that's it! That's all we need!
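On the sensor side, a client just serializes its reading as JSON and POSTs it to the service's webhook. Here is a minimal sketch of building such a request; the URL, secret, and field names are placeholders, not a real endpoint:

```javascript
// Build the HTTP request a sensor client would send to the Stitch webhook.
// The base URL and secret here are placeholders for illustration only.
function buildSensorRequest(baseUrl, secret, reading) {
  return {
    url: `${baseUrl}?secret=${encodeURIComponent(secret)}`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reading), // e.g. '{"temp":22.4}'
  };
}

const req = buildSensorRequest(
  "https://webhooks.example.com/post_sensor", // placeholder webhook URL
  "test",
  { temp: 22.4 }
);
console.log(req.body); // {"temp":22.4}
```

Any HTTP client (curl, an embedded firmware stack, or Node's fetch) can then send this request; the Stitch function above parses the body and inserts it into the `sensors` collection.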
Our HTTP POST service can be reached directly by the sensors from the webhook provided by MongoDB Stitch, like so:

```shell
curl -H "Content-Type: application/json" \
     -d '{"temp":22.4}' \
     https://webhooks.mongodb-stitch.com/api/client/v2.0/app/stitchtapp-abcde/service/sensors/incoming_webhook/post_sensor?secret=test
```

Because MongoDB Stitch scales automatically with demand, you no longer have to take care of the infrastructure or handle failovers.

Next Step

Thanks for taking the time to read my post. I hope you found it useful and interesting. If you are looking for a very simple way to get started with MongoDB, you can do that in just 5 clicks with our MongoDB Atlas database service in the cloud. You can also try MongoDB Stitch for free and discover how the billing works. If you want to query your data sitting in MongoDB Atlas using MongoDB Stitch, I recommend this article from Michael Lynn.

October 3, 2018

The Leaf in the Wild: Wearable Sensors Connecting “Man’s Best Friend” - Tractive & MongoDB

Leaf in the Wild posts highlight real-world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

I had the opportunity to sit down with Michael Lettner, CTO of Hardware & Services, and Bernhard Wolkerstorfer, Head of Web & Services at Tractive, to discuss how they use MongoDB at their Internet of Things startup.

Tell us a little bit about your company. What are you trying to accomplish? How do you see yourself growing in the next few years?

Tractive is a cool 18-month-old startup designed for pet owners. We extend the concept of the “quantified self” to the quantified pet, enabling owners to monitor their beloved companions through wearable sensor technology. Our first service was the GPS Pet Tracking device, which attaches to the pet's collar and enables the owner to receive real-time, location-based tracking on their iOS or Android device. Users can also define a safe zone that acts as a virtual fence: whenever the pet leaves the safe zone, a notification is sent to the owner's device.

We have extended our products to include Tractive Motion, which tracks a pet's activity. Owners can compare how much exercise their pet is getting to other owners with the same breed. The Peterest image gallery enables owners to share images and activity with other members of their social network, and Pet Manager can be used to record veterinary appointments, allergies, vaccination schedules, and more.

Tractive is currently available in over 70 countries, mainly across Europe and the Middle East, and is now rapidly extending worldwide, with our first customers recently added in the USA, Asia, Australia, and New Zealand.

Please describe your application using MongoDB.

MongoDB is our primary database. We use it to store all of the data we rely on to deliver our services, from sensor and geospatial data, to activity data, to user data and social sharing. Image data is stored in AWS S3, with its metadata managed by MongoDB.
We also use MongoDB to log all data from our infrastructure, ensuring our service is always available.

Why did you select MongoDB for Tractive? Did you consider other alternatives?

We initially came from a background of using relational databases, but we believed these were not appropriate tools for managing the diversity of sensor data we would rely on for the Tractive services. In addition, we knew we would be rapidly evolving the functionality of our apps and were concerned that the rigidity of the relational data model would constrain our creativity and time to market. We knew the way forward was a non-relational database, and many would give us the flexible data model our app needed. Beyond a dynamic schema, we had additional criteria that guided our ultimate decision:

How easily would the database allow us to store and query geospatial data?
How well could the database handle time-series and event-based data?
What sort of query flexibility did the database offer to support analytics against the data?
How easily and quickly could the database scale as our customer base and data volumes grew?
Was the database open source?

There is a multitude of key-value, wide-column, and document databases we could have chosen. Many could ingest time-series data quickly, but they lacked the ability to run rich queries against the data in place, instead forcing us to replicate the data to external systems. Only MongoDB met all of our key criteria: easy to develop against, simple to run in operations, and without throwing away the type of query functionality we had come to expect from relational databases.

Please describe your MongoDB deployment.

We run our MongoDB cluster across three shards, with each shard configured as a three-node replica set. This architecture gives us the resilience we need to deliver always-on availability and enables us to rapidly add shards as our service continues to grow.
The cluster is deployed in a colocation facility with an external service provider. Our backend is primarily based on Ruby, and we are currently running MongoDB 2.2 in production. We are planning a move to MongoDB 2.6 to take advantage of some specific new capabilities:

Aggregation framework improvements, such as cursors
Geospatial enhancements
Index intersection, with the ability to use more than one index to resolve a query

Can you share best practices you learned while scaling MongoDB?

For best results, shard before you have to. Get a thorough understanding of your data structures and query patterns; this will help you select a shard key that best suits your applications. If you follow these simple rules, sharding in MongoDB is really simple. It's automatic and transparent to the developer.

Scaling is of course much more than simply throwing hardware at the database cluster, so we got a lot of benefit from MongoDB tooling in optimizing our queries. During development, we used the MongoDB explain operator to ensure good index coverage. We also use the MongoDB Database Profiler to log all slow queries for further analysis and optimization. For our analytics queries, we initially used MongoDB's inbuilt MapReduce, but have since moved to the aggregation framework, which is faster and simpler.

Are you using any tools to monitor, manage, and back up your MongoDB deployment?

We rely heavily on the MongoDB Management Service (MMS) application for proactive monitoring of our database cluster. Through MMS alerting we identified a potential issue with replication and were able to rectify it before it caused an outage. For backups, we currently use mongodump, but we are evaluating MMS Backup as it has the potential to extend our disaster recovery capabilities. For overall performance monitoring of our application stack, we use New Relic, which is implemented in the drivers we use.

What business advantage is MongoDB delivering?

As a startup, time to market is key.
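The advice to pick a shard key that suits your query patterns can be illustrated with a toy routing function. In a sharded cluster, the mongos router uses the shard-key value of a document to decide which shard owns it, so documents sharing a key value stay together. The sketch below is a pure stand-in for that routing logic, with a hypothetical `petId` shard key; it is not how mongos actually partitions data:

```javascript
// Toy stand-in for shard routing: a deterministic hash of the shard-key
// field decides which shard a document lands on. Illustration only.
function shardFor(doc, shardCount) {
  let h = 0;
  for (const ch of String(doc.petId)) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple unsigned rolling hash
  }
  return h % shardCount;
}

// Every reading for the same pet routes to the same shard, so a
// per-pet history query only needs to touch one shard.
const a = shardFor({ petId: "pet-123", lat: 48.3, lng: 14.3 }, 3);
const b = shardFor({ petId: "pet-123", lat: 48.4, lng: 14.2 }, 3);
console.log(a === b); // true
```

This is why understanding query patterns matters before sharding: a key chosen at random spreads related documents across shards and turns single-shard queries into scatter-gathers.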
We could not have gotten to market as quickly with other databases. MongoDB's flexible document model and dynamic schema have been essential not only in launching the original service, but now as we evolve our products. Requirements change quickly and we are always adding new features; MongoDB enables us to do that. As we add more products and features, we add new customers, so we need the ability to scale our infrastructure fast. Again, MongoDB provides the scalability and operational simplicity we need to focus on the business, rather than the database.

What advice would you give someone who is considering using MongoDB for their next project?

We came from a relational database background and were surprised how easy it was for us in development and ops to transfer that knowledge to MongoDB. That helped us get up and running quickly.

MongoDB schema design is a new concept and requires a change in thinking: from a normalized model that packs data into rows and columns across multiple tables, to a document model that allows embedding of related data into a single object. Developers need to move on from focusing on how data is stored to how it is queried by the application. You need to identify your queries and build your schema from there.

The good news is that there is a wealth of documentation online. The MongoDB blog is a great resource for learning best practices from the community; an example is the awesome post on MongoDB schema design for time-series data, which will help anyone managing this type of data in IoT applications. MongoDB University provides free self-paced training for developers (in multiple languages), administrators, and operations staff. There are also some really useful tutorials covering every step of MongoDB replication and sharding.

Our recommendation would be to perform due diligence during your research: ensure you understand your requirements, then download the software and get started on your evaluation.
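The time-series schema advice above is often realized as a bucketing pattern: instead of one document per reading, readings are grouped into one document per sensor per minute, cutting document counts and keeping each query's data contiguous. A minimal sketch, with illustrative field names:

```javascript
// Bucketing pattern for time-series data: group raw readings into one
// document per sensor per minute. Field names are illustrative.
function bucketReadings(readings) {
  const buckets = new Map();
  for (const r of readings) {
    // Truncate the timestamp to the start of its minute
    const minute = new Date(Math.floor(r.ts.getTime() / 60000) * 60000);
    const key = `${r.sensorId}:${minute.toISOString()}`;
    if (!buckets.has(key)) {
      buckets.set(key, { sensorId: r.sensorId, minute, samples: [] });
    }
    // Embed the individual sample inside its minute bucket
    buckets.get(key).samples.push({ ts: r.ts, value: r.value });
  }
  return [...buckets.values()];
}

const out = bucketReadings([
  { sensorId: "s1", ts: new Date("2014-06-02T10:00:05Z"), value: 1 },
  { sensorId: "s1", ts: new Date("2014-06-02T10:00:35Z"), value: 2 },
  { sensorId: "s1", ts: new Date("2014-06-02T10:01:10Z"), value: 3 },
]);
console.log(out.length); // 2
```

Three raw readings collapse into two bucket documents here (two samples in the 10:00 bucket, one in the 10:01 bucket), which is exactly the embedding-over-normalization shift the schema-design advice describes.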
Wrapping Up

Mike and Bernhard, I'd like to thank you for taking the time to share your experiences with us!

June 2, 2014