Good afternoon. We've got two more to go here in the business track. We've had some great times in the last two days. We've heard many ways that organizations and companies are exploiting MongoDB to help them think differently, act differently. We just heard a great story about Pearson Education-- how a non-unicorn is acting like a unicorn to drive mobile first. We've got two more presentations.
We're here right now to talk about Medtronic-- the world's largest medical device company. We've got two people with us today who are going to share the story of how they're collecting data from medical devices. This is actually quite interesting, because we're talking about pacemakers. A battery in a pacemaker-- an average device lasts 20 years-- what happens when that battery fails? That's life-critical. It's a Class III kind of medical device.
So we have Matthew Chimento, who is the project manager on that effort. He's going to tell us how they've created the system.
Jeff Lemmerman is actually an aspiring data scientist-- a person who curates this data, brings it together-- and actually works across the whole stream to bring that data to life.
They're going to share their story-- over two years-- of how they've collected this data, and how it actually is being validated. Because imagine, if you don't capture the fact that that battery is going to die, what happens to the person with the pacemaker? That's life-changing, it's really important. And I'm really glad they're here to share their story. Gentlemen, thank you so much for being here today.
All right, so. Thanks for the introduction. If you want to read more about our profile, you can find it online. So we'll get right into the presentation.
We're going to give you a little intro to Medtronic-- what we do, what our particular role in the company is-- and talk about our challenge. Being a medical device company, I think we have some unique challenges that some other firms might not have-- with data quality, data retention policies, things like that.
And then we'll talk about our approach, and how we decided to think about using MongoDB as a potential solution to some of those challenges. We also want to give some fair statements on opportunities-- where we think we can go-- and some takeaways, some things that we've learned in this process. We're by no means the ultimate MongoDB experts, but I think we have some valuable advice.
So who is Medtronic? Medtronic is, as she said, the largest standalone medical device company in the world. Being that we are the largest, I like to say that our problems are just twice as big as everybody else's-- all of our competitors'.
We actually work at the Energy and Component Center. Our facility is where we manufacture the discrete components that go into those medical devices that are implanted in your body.
Our facility was established in 1976. Very early on, our founder, Earl Bakken, recognized that for us to make the highest-quality devices, we had to procure and develop components of the highest quality.
So we have a few plant stats there, but essentially our building is a manufacturing facility, as well as being the research and development facility for those discrete components, which we'll talk about next.
One of the key metrics that we track is how many patients that we serve. Medtronic this year announced that we are serving a patient every three seconds. So every three seconds, somebody benefits from a therapy or device that we provide.
What exactly are those components, you might ask. This is a medical device. This is a neurostimulator. The cool thing about this pain stimulator is that it's actually got an accelerometer built in, and it adjusts the pain stimulation based on the way that the patient is sitting or lying or moving. So it's a really, really cool device.
Medtronic served 9 million patients last year. We make a lot of these devices. These devices all have certain key characteristics or key components. This first component is what we call a feedthrough. We make 19 million of these feedthroughs every year. And it takes about 20 process steps to make that component. And we also do a ton of reliability testing on that component. And that's the kind of data that we're actually talking about.
The next component is the defib capacitor. So this capacitor delivers that big jolt of energy in a patient defibrillation-- the internal defibrillation. We make about 3 million of those every year. So again, relatively high volume, especially compared to our competitors.
The next component is the actual battery itself-- one of those batteries. We make four different models of batteries. This is what we call a low or a medium rate battery, depending on the configuration. The rate of the battery is a reference to the rate of discharge-- how quickly that battery can deliver its energy.
And this is a high rate battery. This is, again, the battery that goes into that defib, that delivers that big jolt of energy if the patient needs it and they go into fibrillation. This is one of our rechargeable batteries. We actually make our own lithium-ion rechargeable batteries. One of the common questions that we get is, if this is an implanted device, how do we recharge it? The cool way that this recharges is, there's an inductive coil. The patient holds a kind of a hockey puck over the device, and that actually charges the device from the outside to the inside of the body.
And the final thing we'd like to show is our next generation. This is a fully leadless, implantable pacemaker. The actual size of this pacemaker is about the size of a quarter. It's very, very small, and it replaces the entire device-- a traditional pacemaker.
This is actually in clinical trials right now-- not fully released. But last week, I think I just heard that we implanted our 77th of what we call the micro device. So we're very proud of that technology. You can see that it is a miniaturized device. Miniaturization has been another key challenge-- a need, or a reason, that we've had to think differently about the way that we collect and manage data.
So in all, we have about 110 different part numbers that we ship every single day, for the variety of devices that we make.
The question is, where does all this data come from? As Mike Olson said in his keynote yesterday, humans aren't very good at collecting or generating data-- but machines are really good at it. And we have a lot of machines.
In just the last three years, I know that we've added over 150 additional electronic data collection process steps, and essentially we just keep adding more and more. We're accelerating the rate of data collection, and how many systems we have, every single day.
So essentially, a few years ago a lot of the operations that we had were somewhat manual manufacturing. But as we've moved into this big data frontier, we decided that there's a lot we can glean from the information we can capture.
Again, we have more data collection than ever before. In fact, we estimate that 40% of all the data that we have has been collected in the last two years. So that brings up the question-- what is big data, and where does big data play into our business? So we said, well, do we really have a volume problem? Compared to Amazon, Facebook, and some other companies we've heard from here, we don't have a huge volume problem.
We collect about 1.1 million samples of SPC-- that's statistical process control data-- every day. And about 30 million samples of life test data-- essentially, reliability data-- about our devices, every day.
So for us, 30 million samples is huge. 30 million samples per day is enormous. But to some other companies, it's not. But it really has made us change the way we look at how we're doing things.
And then moving on to velocity-- we've added a ton of different new types of systems, and new data collection systems. So now we're really looking at that kind of massively parallel system-- where we have more systems hitting our databases every day than we ever have before.
But the serious challenge that we have-- and that's why we have it bolded-- is variety. As you can see, we have a variety of different kinds of databases represented on this slide. How many people here are running a Paradox database? Ah, looks like only one-- a guy on my team. [LAUGHTER]
That's what I expected. So we have a Paradox database, we have a dBASE database-- we have all these legacy data systems. And the reason that we have these legacy data systems is that we have a different problem than a lot of companies. Because we're collecting medical quality data, we actually have to save our data for 10 years beyond the last implant of a device. And like we said, some of these devices can last up to 20 years, which means we have to save that data for a minimum of 30 years.
So as far as we know, we have never actually deleted anything. Which is a challenge, you know. And not only do we not delete the data, we actually have to keep those databases up and running-- or we have to come up with a transition plan to move that data, to migrate it to something more modern.
So this is where MongoDB actually gives us an opportunity to solve this problem, a little bit. With that, though, we have some new challenges. Because we have all these disparate systems already, you can imagine that there's a little bit of hesitance to say, oh, we're going to throw this new database technology on top-- which we guarantee is awesome, because no new technology has ever had trouble before, right?
So throwing this new technology on top of all these existing legacy systems was a bit of a hurdle for us. So we're going to talk a little bit about how we overcame that hurdle. We want to talk, really, about what we're trying to do here. In our estimation, we spend about 80% of our time and effort right now on data wrangling-- grabbing data from these disparate data systems-- and about 20% of our time doing data analysis. And we really see ourselves as a heavy data analysis company, in some ways. We're really trying to gain information from this data.
So if we're spending 80% of our time just doing ETLs, and 20% of the time doing analysis, we're really not getting what we want out of the systems that we have. This is a data framework that we've come up with, that's guiding our particular facility. We have a number of legacy systems, and the one right in the middle is TestVault. TestVault is our platform for collecting data into MongoDB.
But really, we want all these systems to feed our platform of predictive analytics, big data management, and data visualization. And we want to transform all the data flowing through these systems into knowledge-- because ultimately that's what's useful for us in making decisions, the quality decisions that guide us every day. So with that, I'm going to transition to my partner, Jeff, who's the technical lead. And he's going to talk about our technical solution to some of these problems. [APPLAUSE]
Thank you, Matt. This morning we heard about the fourth paradigm-- what is called data-intensive science in Jim Gray's book, The Fourth Paradigm. I've actually read this book. I recommend it to those of you who work in science, or are interested in the ways that science is evolving to become more data-intensive.
As I was reading this book, I really felt that I was living this transition to the fourth paradigm. I joined the battery research group at our Energy and Components Center eight years ago. And at that time, my manager and his manager had the vision of a data scientist-- it wasn't called data science at the time-- but they had the vision of someone embedded in their research and development team who had the skills to pull data together and help them with analysis.
And they actually called me a software engineer, data visualization specialist. And I call myself an aspiring data scientist, because I'm trying to acquire the skills necessary to change my title to software engineer, data scientist. I have some places to improve on in the statistical areas.
But I'm going to tell you a little bit about what I'm doing on the data wrangling side of things today. I've spent most of these eight years putting Band-Aids on our existing systems. And you can see a lot of them along the bottom-- Matt talked about a few of these-- automated equipment that's producing data, and data sources that are not very friendly to work with. He talked about Paradox and dBASE III. So not only do we have to keep the data for a long time after these devices are implanted, but we're actually still using systems that are writing data in those formats. And the only way to get rid of those data formats would be to replace the equipment that's producing them.
And we're in the process of doing that, with some of the projects that Matt alluded to. But it's not a simple thing to just go in and replace these systems that are fully developed, validated, tested, and have been in use for many, many years. I'm sure that a lot of you have some of these same challenges. Bringing in a new technology just makes that difficult, also.
So I'm in this unique position. I work outside of the IT organization at Medtronic-- I work in the research and development group, like I mentioned. And so I have a unique opportunity to explore new tools and technologies.
And I really consider this to be part of my mission-- to help our scientists and engineers generate this knowledge. You know, Matt showed that 80-20 graph-- I consider it my job to drive that burden toward 0% of their time spent finding and organizing data.
So I started thinking about how I would design a system to do this from the ground up. Just for a few minutes, let's forget about all these existing systems and these challenges. If you could start from the beginning, how would you design test equipment to write data, and to manage that data? I think that although this is almost never possible, it really helps you form a vision for what may be possible. And then you can come back to reality a little bit from there.
To start designing this system, it's important to think about how the data is going to be used. You've been hearing this in a lot of talks, throughout the course of the conference. How the data is going to be used, I believe, is really the most important thing. And so at MACC, we have kind of this four-level model of how the data is used.
In the upper left you can see we have operational dashboards monitoring the manufacture of these components-- how that's going. This might be directors and managers looking at how their value streams are producing components, and how things are generally going-- the yields of those individual components. A little bit below that, we're actually doing statistical process control on individual steps within the process to build that component. You might be plotting one single parameter over time and looking for general trends. You may have set up control limits on that trend, and decided when you want to get notified about it. And so that's a level below the dashboard, but again it's somewhat simple, in that you're monitoring one parameter over time.
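To make that concrete, here is a minimal sketch of that kind of single-parameter control chart logic. The parameter and the sample values are hypothetical, not Medtronic's, and real SPC implementations use more sophisticated limit estimation; this just shows the "one parameter over time, notify outside the limits" idea:

```python
# Sketch: +/-3-sigma control limits for one process parameter over time.
# The baseline data and the "fill voltage" parameter are hypothetical.
from statistics import mean, stdev

def control_limits(samples, sigmas=3):
    """Center line and +/- N-sigma control limits from baseline data."""
    center = mean(samples)
    spread = stdev(samples)
    return center - sigmas * spread, center, center + sigmas * spread

# Establish limits from an in-control baseline run, then watch new points.
baseline = [3.21, 3.19, 3.22, 3.20, 3.18, 3.21, 3.20]
lcl, center, ucl = control_limits(baseline)

new_points = [3.21, 3.35, 3.19]
flagged = [v for v in new_points if not lcl <= v <= ucl]
print(f"limits: ({lcl:.3f}, {ucl:.3f})")
print("notify on:", flagged)  # 3.35 falls outside the upper limit
```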
The logical next question would be-- if something's going wrong in that statistical process control chart-- why is it doing that? Why is it trending downward, or upward? To get to that, we start talking about multivariate analysis, and what I refer to as an analysis-ready data set.
And that's what is depicted in the lower left there. Now you're looking not at one parameter as a function of time-- you're actually trying to compare two different parameters. And there we're actually comparing against one of our discharge models for a battery. So it's a somewhat more sophisticated analysis, looking at that multivariate approach. A layer below that is the actual measurement data coming off the equipment-- what we refer to as waveform, or time series, data. And that's what's showing in the lower right there-- a battery going through a pulsing test, where you can see it's measuring voltage through that test.
A use for this data may be that you're just testing that the battery can actually do this pulse, and then you want to store just the minimum voltage from that first pulse. But folks in my group are actually developing very sophisticated performance models of how the battery is able to do this pulse-- and they actually need the exact shape of that waveform. Again, before you can design a system, you need to understand how it's being used, and really think about it.
So for the rest of my talk, I'm going to talk about the holistic view of the components that we manufacture as one use case. And the second is this time series data use case, which you see in the lower corner here.
I started thinking about how you're going to store this data at the lowest level of this infrastructure. And my mission was bridging the gap between how the data's stored and how it's going to be used by these various scientists and engineers. And I thought, well, one way to do that would be to store the data directly in the tool we're going to use to analyze it. Has anyone in the audience ever used Excel to analyze data? All right.
When I first started in the research group, eight years ago, some of the first tools I was asked to create were about how to get this data into Excel. That's all the scientists and engineers wanted-- just give me the data in Excel, and I'll take it from there. And in some sense, you know, that's great. Excel is very powerful. It gives them ultimate flexibility to work with their data. And you know, it's everywhere, and very approachable for those end users.
So for this hypothetical case, let's just say we're storing all the information about the components we manufacture in Excel. Well, we have part number, lot number, and serial number. All the components we manufacture usually have those things, and that's kind of where the similarities stop. Matt was talking about batteries, capacitors, feedthroughs-- we start to have very different things we want to know about those components. And I've shown here just a simple case of a battery that's going through one step of the manufacturing process. We actually fill that battery with electrolyte. And things we might want to know about that step would be: when did you fill it with electrolyte? How much electrolyte did you put into it? Which lot of electrolyte did you put into that battery?
You can start to see how this would quickly grow to be many, many columns. And then handling this variety that we've been talking about would be a challenge when I have a different component going through very different manufacturing steps.
Besides those challenges of how you would organize that data in Excel, there are the obvious things like: how's the data getting into Excel? Scalability, multiple users, how do I get past the row limitation of one million rows-- all those things you would obviously think of if you were using Excel to store this data. So Excel is probably not the best choice for your enterprise data warehouse. And we know that there are solutions out there to handle those types of things.

But for now, let's think about Excel as at least a model for the report format for our end users. We could, at the time the users want to get this data about components back, go out to each of these discrete data sources. So we have the equipment generating text files, Paradox data, SQL, Microsoft Access. We'll just leave that data where it is. And we'll pull it into some type of report that the users can then consume, that looks very much like that Excel sheet-- each row is a component, and then what you want to know about those components.

Well, batch reporting and things like that-- that's not a new concept. But what happens when a connection to a data system doesn't exist for your reporting tool? Or what happens when you want to ask different questions than the reporting application was created for? So you might create a report that runs Monday morning, that gives you the last week of components that were manufactured. But now someone comes to you and says, what about the last year?
Or they want to look at a bigger picture of components, to do some type of root cause analysis on why their SPC chart was trending. And they want to look at much more data than was in your report. Now you're saying, OK, well, I need to tweak that report a little bit, and I'll let it run overnight-- or in a few days you can have your report and start analyzing it. And that's kind of the situation we were in, a little bit-- doing this reporting, going out to these discrete data sources.
So another option I mentioned would be to do some of this work ahead of time, and load that data into a central repository-- your classic relational database system, the enterprise data warehouse. And now we're developing all these ETL processes that are doing this loading at night for us, and having to develop and test all of these processes, and schedule them, and support them. And we're all aware of the difficulties there.
And then you have to actually design the central repository. And we have heard about that challenge-- I kind of use that Excel picture to help people understand how difficult it would be to design, even if you started with a one-table database. You can see how it becomes very complicated, quickly. And it's a very fragile process-- as things change, this ETL process can break down.
We've taken our simple Excel table with all of our component information in it, and we start doing kind of nasty things to optimize it for storage in the central repository. We've broken it out into four tables, just to store some of that simple information about our components.
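As a hypothetical, cut-down version of that kind of normalized design (three tables here instead of four, with the string-typed attribute table Jeff describes next), the flexibility/queryability trade-off looks something like this:

```python
# Sketch of a hypothetical entity-attribute-value style schema: storing
# component facts as string name/value rows forces multi-way joins and
# string casts. Table and attribute names here are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE component (id INTEGER PRIMARY KEY, part_no TEXT, serial_no TEXT);
CREATE TABLE prod_step (id INTEGER PRIMARY KEY, component_id INTEGER, step_name TEXT);
CREATE TABLE step_fact (step_id INTEGER, attr_name TEXT, attr_value TEXT);
""")
db.execute("INSERT INTO component VALUES (1, 'BAT-100', 'SN0001')")
db.execute("INSERT INTO prod_step VALUES (10, 1, 'electrolyte fill')")
db.execute("INSERT INTO step_fact VALUES (10, 'fill_voltage', '3.21')")

# Even "which batteries filled above 3.2 V?" needs a three-way join,
# plus a CAST because every attribute value is stored as a string.
rows = db.execute("""
SELECT c.serial_no
FROM component c
JOIN prod_step p ON p.component_id = c.id
JOIN step_fact f ON f.step_id = p.id
WHERE p.step_name = 'electrolyte fill'
  AND f.attr_name = 'fill_voltage'
  AND CAST(f.attr_value AS REAL) > 3.2
""").fetchall()
print(rows)  # [('SN0001',)]
```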
And as you can see at the bottom there, our queries can quickly become very complicated. And I actually didn't break this out as much as you might if you were actually designing an enterprise data warehouse. I've built in some flexibility here by using strings for attribute name and attribute value-- I've built some flexibility into that table, just using string data types. But that limits your ability to pull data out, because now you're not able to treat those string values as real values in your queries. So you're already somewhat limited in the questions you can ask, and you're putting a lot more burden on your application to translate that data back into how it's going to be used. So like I said, I felt that my mission was always to be looking for new tools that can help us do our job better, and really drive that burden toward zero. So about two years ago, I heard about MongoDB. And I thought, well, I'm going to download this tool, give it a try, and see if it might be able to help us solve some of these challenges that I've been thinking about.
And I came up with this vision of: what if I could just load every component we manufacture directly into MongoDB? And again, I realized that this wasn't really going to be possible-- to go out and replace all of the equipment on our manufacturing lines, and change it to send components directly to MongoDB. So to do our proof of concept, I used MongoDB as a way to go out and do this data wrangling process. My proof of concept basically consisted of choosing one battery model that we manufacture. I decided that when that battery goes through electrolyte fill-- which was the example I showed earlier-- I want you to go out and find everything you can about that component. Go out to all of these systems, find everything you can about that component, and then load it into MongoDB. Then you have to worry about things like when that information is updated, and how it's retrieved. But this was a very simple place to start. We'd take one of our components, at one of its manufacturing steps, and find everything about it. That step was kind of a trigger to load this into our database.
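A rough sketch of that proof-of-concept loader, under stated assumptions: the source-reading helper, collection names, and field names below are hypothetical stand-ins, not Medtronic's actual TestVault schema. The shape of the flow-- a step completion triggers a gather from legacy sources and an idempotent upsert into MongoDB-- is the point:

```python
# Sketch: when a battery completes electrolyte fill, gather everything
# known about it from legacy sources and upsert one document into
# MongoDB. All names here are hypothetical illustrations.
from datetime import datetime, timezone
from pymongo import MongoClient

components = MongoClient("mongodb://localhost:27017")["testvault"]["components"]

def read_legacy_sources(serial_no):
    """Stand-in for pulling from Paradox/dBASE/SQL/text-file sources."""
    return {
        "partNumber": "BAT-100",
        "lotNumber": "L2301",
        "subComponents": [{"type": "electrolyte", "lotNumber": "E-778"}],
        "productionSteps": [{
            "stepName": "electrolyte fill",
            "completedAt": datetime.now(timezone.utc),
            "facts": {"fillVoltage": 3.21, "fillMassGrams": 0.92},
        }],
    }

def on_electrolyte_fill(serial_no):
    doc = read_legacy_sources(serial_no)
    doc["serialNumber"] = serial_no
    # Upsert so re-running the wrangler for a component is safe.
    components.replace_one({"serialNumber": serial_no}, doc, upsert=True)

on_electrolyte_fill("SN0001")
```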
All right. So I've created this data loading service. And I've kind of done it for one component. But I still kept holding on to this vision of, I kind of want to do it directly into MongoDB.
And I actually have a vision for where we may be able to take this, which is a device-level view. Matt was saying that a device can have multiple components. So a logical extension of this component view may be to go to the device view. But now we're talking about extending my vision outside of our facility, and that's going to take some time to really show the proof of concept there. But that's another way it could be extended.
So what does the data look like once it's in MongoDB? I'm sticking with this simple example of a battery that's going through electrolyte fill. So we have one component, going through one manufacturing step, and the facts we care about for that step. And I've modeled it in a JSON format. And then I could put this directly into MongoDB.
This was not the first version of our component schema, but it's pretty close. It's a somewhat natural transition for me to think about the things that would be at the highest level in that document. You can see there are some of the things I said that every component we manufacture would have-- the lot number, part number, serial number-- those first three columns from my Excel table.
And then you start to get the variety built in. Very different things go into a battery than go into a capacitor. So I thought, well, we'll just have a collection of subcomponents. And in this case, we would have electrolyte being one of the subcomponents that went into the battery.
I didn't show it all here, to keep it on one slide. But you can see there's kind of a collection of subcomponents there. And then we found a way to abstract the production step to be a step name, a step number, when it happened, and that kind of thing. And you can imagine that there would be many steps that this component goes through. And then you can see the actual data. Now the difference here from our relational database approach, where I was storing strings, is that I'm actually able to store these facts with native data types. So you can see I have a fill voltage-- and it's actually a number-- and I have when that happened-- and it's a datetime. And I have a string in there as well. And I have some other information about this component.
This gave us a lot of flexibility. But I will caution you that we did have to put quite a bit of thought into what we called things-- especially in that facts section, where we were building in flexibility. You could have one battery group call it electrolyte fill, and then someone else decides to call it "e-fill." So you do have to put some thought into what things are going to be called, and how you're going to control that-- whether you're going to do it just at the application layer, or whether you're going to develop some internal standards saying, this is what it's called, and please call it this in your data.
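One hedged sketch of that application-layer naming control (the canonical names and the fail-loudly policy here are illustrative assumptions, not Medtronic's actual standard): normalize names before anything is written, so "e-fill" and "electrolyte fill" can't both end up in the data.

```python
# Sketch: normalize fact/step names at the application layer before
# insert. The mapping below is hypothetical.
CANONICAL_STEP_NAMES = {
    "e-fill": "electrolyte fill",
    "efill": "electrolyte fill",
    "electrolyte fill": "electrolyte fill",
}

def normalize_step_name(raw):
    try:
        return CANONICAL_STEP_NAMES[raw.strip().lower()]
    except KeyError:
        # Fail loudly rather than letting a new spelling leak into the data.
        raise ValueError(f"unrecognized step name: {raw!r}")

assert normalize_step_name("E-Fill") == "electrolyte fill"
```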
And that's really important, because you're trying to facilitate analysis across components. And you really want simple, fast queries. Instead of doing those nasty joins that we saw in my relational example, I'm able to find the complete history for a component with a very simple query. I happen to know the serial number that's put on that battery, and I want to know everything about it. This is an example of what that query might look like in Mongo. And I get that complete history back-- it all comes back with that component. So yes, that is very exciting.
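The slide isn't reproduced here, but in the Python driver the retrieval pattern being described would look roughly like this (using the hypothetical field names from the sketches above):

```python
# Sketch: one indexed lookup returns the component's complete history.
from pymongo import MongoClient

components = MongoClient("mongodb://localhost:27017")["testvault"]["components"]
components.create_index("serialNumber", unique=True)

# "I happen to know the serial number... I want to know everything about it."
history = components.find_one({"serialNumber": "SN0001"})
print(history["productionSteps"])  # every step, facts included, no joins
```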
Again, the names being consistent across the components is something we've thought a lot about-- across the different battery models, trying to standardize wherever we could. Maybe someone wants to look at electrolyte mass across all 30 batteries we manufacture. Well, if the names were different, you can see how that might be challenging. And again, it's optimized for retrieval, where the relational approach was kind of optimized for storage. Our use case was that we wanted to be able to find this complete history for a component as fast as possible. So we really optimized for retrieval, and getting all that information back. Another area that was challenging for us was what goes into this component document, and what does not go into it. I've heard a few examples of this at the conference-- with the customer example, of creating a single view of the customer. They were getting clickstream data for that customer, and they decided to keep that out of the customer view. You could imagine that would be a lot of data. That's the analogy to our time series data for that component. You might store the click activity for the last 30 days-- a count of it, an aggregate-- in that customer view.
We might do the same thing with our components, where we have the time series data separate-- where we might be storing it at 1 millisecond resolution for this waveform-- but we might store some summarized information about the waveform in that component view. So we might store the minimum voltage during that first part of the pulse, or the last voltage. So we have that in the component view, and we have our time series view also. Those are the two use cases we've been talking about. And how does the time series data get there? We've been able to develop a RESTful API to enable this. And we're pretty excited about some of the success we've shown with getting time series data into MongoDB as well.
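A sketch of one common way to split those two views-- this is the classic bucketing pattern, not necessarily Medtronic's exact schema, and the collection and field names are hypothetical: raw 1 ms waveform samples go into a separate time series collection, and only an aggregate (like the minimum pulse voltage) lands in the component view.

```python
# Sketch: raw waveform samples in one collection, summary in the other.
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["testvault"]

def record_pulse_test(serial_no, t0, voltages_mv):
    # One document per pulse test: a timestamped bucket of 1 ms samples.
    db.waveforms.insert_one({
        "serialNumber": serial_no,
        "testType": "pulse",
        "startTime": t0,
        "sampleIntervalMs": 1,
        "voltagesMv": voltages_mv,
    })
    # Only the summary goes into the component's history.
    db.components.update_one(
        {"serialNumber": serial_no},
        {"$push": {"productionSteps": {
            "stepName": "pulse test",
            "completedAt": t0,
            "facts": {"minPulseVoltageMv": min(voltages_mv)},
        }}},
    )

record_pulse_test("SN0001", datetime.now(timezone.utc),
                  [3210, 3110, 2980, 3050, 3180])
```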
Matt and I were able to do a ton of benchmarking to compare this approach to others. And so I think that I'll welcome Matt back up to summarize some of our experience with Mongo.
Thank you. [APPLAUSE]
All right, so. Again, we wanted to give a fair report of our experience with Mongo, and some of the things that we found out. For us, the consulting and training were excellent. We actually had MongoDB on site for a full week of personalized training. And that was great for two reasons. The first one, obviously, was that it was a great learning experience. But the second one was that it was an opportunity for us to engage our IT people-- our IT group, and some of our other groups-- directly with the folks from MongoDB. That early engagement really helped with the acceptance and adoption of the new technology. And we even had desktop support people included in the training class. There was no limitation to how many people we could have-- or at least, no unreasonable limitation. So having those people involved was really, really helpful in gaining acceptance.

The support agreement actually is fantastic for us. I think we've actually underutilized it. We're a medical device company-- we move slower than MongoDB does. They clearly move very fast; they're releasing things fast. We really think what they offer is fantastic. And if you get the opportunity, definitely use it.

The MongoDB monitoring service also was a challenge for us, in that because we're medical device, we're very cautious about our data. So sending any data or performance data out to the cloud was, as soon as we mentioned it, a no-no with our IT. But they do have a solution that you can host on site. So that's what we're working on implementing now.
I thought it was worth mentioning the MongoDB certification. Clearly, MongoDB has been thinking about what it takes to make this a legitimate, acceptable enterprise solution. Knowing that there is this certification opportunity adds to that legitimacy for Mongo.
But again, there were some gaps. You know, we talked a lot about enterprise acceptance. And I think you can see from this conference everything that MongoDB is doing. They've really closed a lot of those gaps-- the enterprise support contracts are a huge part of that. Like I said, the training, the conference, all these different things really add up to make sure that companies can be a little bit more progressive-- can jump in feet-first and accept it.
So some of the things that we actually did were the benchmarking and the white papers. I mean, those proofs of concept are always going to help you. Integration with off-the-shelf reporting-- again, you can see from the conference that that's changing, too. So that's something that we're not really even concerned with going forward.
User interface for MongoDB cluster management-- again, also something that was just launched, it seems like. That automation, I think, is going to be huge for us. Neither Jeff nor I is a huge Linux guru, and implementing our first replica sets and setting up our environment was not super-trivial for us, actually-- unfortunately. And then I'll just go to the last one, which is Part 11 compliance for audit tracing. Again, that's something that was a huge, huge barrier for us-- we have to track everything that goes on. We're not allowed to delete data, we're not allowed to change data. So audit tracing and compliance was a huge issue for us. And MongoDB was very, very helpful in helping us get over that hurdle and look for possible solutions. That is all I have. If anybody has any questions-- [APPLAUSE]
I just want to say thank you to you guys for sharing your story with us. It's really great to hear about these things-- medical devices, where you need to keep a history of everything, forever. 20 years ago we weren't even thinking about capturing data and never deleting it. And now you have to replicate that and use it forever. And that helps all of us. That's really amazing. Thank you for sharing that.
Questions? We've got time for about three questions. [AUDIENCE MEMBER] So, I'm in healthcare as well. And as I'm listening to you guys, I'm reflecting on our own journey, two-plus years ago, with MongoDB. Very similar patterns-- doing a little bit of asset management, time series data. How has it grown now? Have you moved on from one battery to other components? Have you been able to expand that? And how was the acceptance from your other operational reporting teams? Were you providing other tooling for the analytics? Did you build other APIs so they can interface into them? I'm finding that challenge now, and resolving it in my own ways, but I'm just kind of curious. One is, how have you expanded? And then, how have you solved some of the reporting challenges as you move out of the relational ecosystem and all of the tooling out there?
Well, I think that's a great question. We have expanded to other components. Currently, it's basically all the batteries that we manufacture. I'm also currently working on one of our capacitor models. So that has happened.
Facilitating with the rest of the environment is basically about creating that intermediate, analysis-ready data set. So creating that report from Mongo that works with our existing tools-- whether it's JMP, Spotfire, Tableau, those types of tools-- and standardizing that intermediate layer. I'd love that direct connection to the data. We're definitely investigating those possibilities. But that's where we're at now: creating that intermediate layer. I think we have time for one more question, and then we've got to break and go forward, yeah.
OK, there's one here. Thanks.
What tool did you guys use for reporting with MongoDB?
Yes. So that currently is a custom application that I created, to define which components you want to know about and then write out, basically, that complete history-- flattening out that structure that you saw in the JSON document into this analysis-ready data set that can be consumed by the other existing tools. So that's a custom application I created. But we're looking at other existing options, and we'll go from there.
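The actual application is custom and not shown in the talk, but the flattening step being described-- one row per component, one column per step fact-- might look roughly like this, reusing the hypothetical field names from the earlier sketches:

```python
# Sketch: flatten nested component documents into a one-row-per-component
# table that JMP/Spotfire/Tableau-style tools can consume.
import pandas as pd
from pymongo import MongoClient

components = MongoClient("mongodb://localhost:27017")["testvault"]["components"]

rows = []
for doc in components.find({"partNumber": "BAT-100"}):
    row = {"serialNumber": doc["serialNumber"], "lotNumber": doc["lotNumber"]}
    for step in doc.get("productionSteps", []):
        # Prefix each fact with its step name, e.g. "electrolyte fill.fillVoltage"
        for name, value in step.get("facts", {}).items():
            row[f'{step["stepName"]}.{name}'] = value
    rows.append(row)

analysis_ready = pd.DataFrame(rows)
analysis_ready.to_csv("battery_history.csv", index=False)
```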
OK, one more.
Oh, got one back here.
He's got a mic.
So did you create a messaging layer with the RESTful API to MongoDB? And what was that?
Yeah. So actually we did. We created a messaging layer with a queueing system to take in all the data from different systems and do the translation before we put it into Mongo.
What was the technology used for the messages?
Actually, right now it's on Microsoft Message Queuing. And I guess we're looking at translating that to Service Bus.
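The real system sits on Microsoft Message Queuing, as Jeff says; as a language-agnostic sketch of the translate-then-insert pattern he describes (with Python's in-process queue standing in for MSMQ, and hypothetical message and field names):

```python
# Sketch: a consumer pulls raw equipment messages off a queue and
# translates them into the canonical document shape before insert.
import json
import queue
from pymongo import MongoClient

components = MongoClient("mongodb://localhost:27017")["testvault"]["components"]
inbox = queue.Queue()  # stand-in for the MSMQ / Service Bus queue

def consume():
    while True:
        raw = inbox.get()
        if raw is None:  # shutdown sentinel
            break
        msg = json.loads(raw)
        # Translation layer: map each source system's quirks onto the
        # canonical schema before anything touches MongoDB.
        components.update_one(
            {"serialNumber": msg["sn"]},
            {"$push": {"productionSteps": {
                "stepName": msg["step"],
                "facts": msg["data"],
            }}},
            upsert=True,
        )
        inbox.task_done()

inbox.put(json.dumps({"sn": "SN0001", "step": "weld", "data": {"ok": True}}))
inbox.put(None)
consume()
```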
There was one more over there-- was that what you were saying? Yeah? OK. Again, thank you very much, guys. It was great. Really appreciate the time. I'm going to ask you to fill out your surveys at the end of this.
In ten more minutes, we have one last presentation-- a really great story about starting from ground zero and building an application for messaging. Social messaging in India, from 0 to 15 million users in nine months. It's a great story, please join us. And we'll have a wrap-up at the end with all the great tidbits I picked up through the last two days. So you really want to hear it. Thanks for being here.