
Build AI (into your) apps with MongoDB and SuperDuperDB

83 min • Published Jan 08, 2024
00:00:00Introduction and Background
The video begins with introductions and a discussion about the career paths of the co-founders of SuperDuperDB. They discuss their journey in creating SuperDuperDB and the challenges they faced in the fields of AI and e-commerce.
00:13:58Development and Challenges of Super Duper DB
The speakers delve into the development of SuperDuperDB, its initial challenges, and how they overcame them. They also discuss the unique naming of the tool and the backgrounds of the founders.
00:27:56SuperDuperDB and MongoDB Integration
The conversation shifts to the integration of SuperDuperDB with MongoDB, the benefits of MongoDB's Atlas Vector Search, and the potential for more sophisticated AI applications.
00:41:54Technical Walkthrough and Demonstration
The speakers provide a detailed walkthrough of creating a chatbot using SuperDuperDB, MongoDB, and OpenAI. They also discuss how they use Atlas Vector Search and MongoDB for data processing and machine learning tasks.
00:55:52SuperDuperDB Features and Use Cases
The discussion moves to the key features of SuperDuperDB, its ability to handle external data references, and its potential use cases, including video search and audio search.
01:09:50Future of AI and Closing Remarks
The video concludes with a discussion on the future of AI, its potential applications in various industries, and advice for those considering starting their own technology startup. The hosts also encourage viewers to experiment with MongoDB's Atlas and SuperDuperDB, and to participate in their community forum.
The primary focus of the video is the integration of AI with database solutions, specifically through the use of SuperDuperDB and MongoDB.
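The discussions above revolve around vector search: representing data as embedding vectors and ranking stored items by their similarity to a query vector. As a toy illustration of that core idea (plain Python with made-up three-dimensional vectors; this is not Atlas Vector Search or SuperDuperDB code), a search might look like this:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of the magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def vector_search(query_vec, docs, top_k=2):
    # docs: list of (id, vector) pairs; returns ids ranked by similarity.
    scored = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings"; a real system would get these from a model.
docs = [("apple", [1.0, 0.1, 0.0]),
        ("orange", [0.9, 0.2, 0.1]),
        ("car", [0.0, 0.1, 1.0])]
print(vector_search([1.0, 0.0, 0.0], docs))  # fruit vectors rank first
```

Atlas Vector Search performs this kind of ranking natively and at scale, over vectors stored alongside the documents; SuperDuperDB's role, as the episode explains, is keeping those vectors computed and up to date.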
🔑 Key Points
  • SuperDuperDB was created to bridge the gap between databases and AI models, making AI more accessible for companies.
  • MongoDB's Atlas Vector Search is a highly performant native vector search engine that works well with SuperDuperDB.
  • SuperDuperDB can handle a variety of data types and can be used to build a wide range of applications.
  • The future of AI is predicted to see a rise in reinforcement learning and its potential application in robotics.
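The chatbot walkthrough in the video (around 00:41:54) follows the usual retrieval-augmented pattern: embed the question, retrieve the most similar stored chunks, and hand them to a language model as context. Below is a minimal self-contained sketch of that flow; `fake_embed` is a deterministic stand-in for a real embedding model, and the assembled prompt is returned in place of an actual LLM call (the demo itself uses OpenAI models and Atlas Vector Search).

```python
def fake_embed(text):
    # Stand-in for a real embedding model: bucket words into a tiny
    # fixed-size vector so the example runs with no dependencies.
    vec = [0.0, 0.0, 0.0]
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % 3] += 1.0
    return vec

def retrieve(question, chunks, top_k=1):
    # Rank stored chunks by dot-product similarity to the question.
    q = fake_embed(question)
    def score(chunk):
        return sum(a * b for a, b in zip(q, fake_embed(chunk)))
    return sorted(chunks, key=score, reverse=True)[:top_k]

def build_prompt(question, chunks):
    # A real system would send this prompt to an LLM; here we just
    # return it to show the shape of the request.
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQ: {question}"

chunks = ["vector search finds similar documents",
          "pip install superduperdb"]
print(build_prompt("how does vector search work", chunks))
```

The point of the sketch is the shape of the loop (embed, rank, assemble context), not the scoring function: swapping `fake_embed` for a real embedding API and the returned prompt for a chat-completion call gives the architecture demonstrated in the episode.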

Full Video Transcript
[Music]

Shane: Welcome, everybody. It's a Thursday, so as ever it's another exciting episode of MongoDB TV Cloud Connect. I'm Shane McAllister, one of the lead Developer Advocates here on the developer relations team at MongoDB, and I host this show every Thursday on YouTube and LinkedIn. For regular viewers, it's really good to have you back as always, and for new viewers, you are very welcome. Do check out our past shows on YouTube or LinkedIn, and look out for forthcoming episodes as well. While we're waiting to get underway, do let us know in the comments on LinkedIn or YouTube who you are and where you're joining us from; it's lovely to hear that, and I've seen some comments already. This is being recorded, so it will be available afterwards, both on YouTube and in MongoDB's LinkedIn past events; if you can't stay for all of it, don't worry, it'll be there for you to jump back into and review. And as always, if you're new to this, please don't forget to like and subscribe on our YouTube channel, or follow MongoDB on LinkedIn.

With all that housekeeping out of the way, let's get on with the show. In today's episode we have the privilege of delving into the realm of groundbreaking database solutions and AI integration with the co-founders of SuperDuperDB, Teemo and Duncan. SuperDuperDB isn't just a name; it's a promise of exceptional database capabilities coupled with AI-driven insights, addressing industry gaps and challenges. Teemo and Duncan have embarked on a journey to redefine how databases interact with and leverage the power of artificial intelligence, and their platform's integration with MongoDB's own Vector Search technology is reshaping the landscape of database functionality, harnessing the potential of AI while building on the robust capabilities inherent in MongoDB. On today's show we're going to explore all of that: the genesis of SuperDuperDB, their mission, their innovative strides, and how they seamlessly integrate with MongoDB. To help me with this, I'm also joined by Mark Smith, my colleague on the developer relations team. So without further ado, let's get everybody onto the stream. Teemo, Duncan, Mark, you're very welcome. Good to see you.

Mark: Hi everybody.

Shane: Excellent. Mark, first of all, I know you have your own show with us on MongoDB TV. Why don't you give that a quick plug and give yourself a little bit of an intro to our audience, and then we'll move on to Teemo and Duncan.

Mark: Fantastic. Hi everybody, I'm Mark, a developer advocate here at MongoDB. I started my coding career in the '90s as a Java developer, switched to Python around 20 years ago, and then moved into developer relations around eight years ago. I run a live stream on Wednesdays at 2pm UTC on the MongoDB YouTube channel, called Code with Mark. At the moment we're building an ODM in Python, using a whole load of magic Python features like descriptors and metaclasses to hide what's going on under the hood. It's a lot of fun, and it's intermediate, almost advanced, Python, with some MongoDB flavor in there as well, working around design patterns and things.

Shane: Excellent, good plug. It's great to see so many people joining us in the comments from loads of different places around the globe; the geographic spread of our viewers always amazes me, given time zones and everything else. I'm based in Ireland, if you can't tell from the accent, so it's 5pm here, and we try to cover as much of the globe as possible. So thank you to everybody joining from Turkey, Norway, Italy, Nigeria, Germany; it's really good to have you. Moving across to our SuperDuperDB
friends. And that's a name I'm going to have to ask you about, where it all came from, other than being a super catchy name. But prior to that, Teemo first and Duncan afterwards. Teemo, how did you get here? What was your career path before founding SuperDuperDB?

Teemo: I became a founder kind of by accident. I did an internship during my master's, and I was fascinated with the idea of building a business. The company back then was in an environment, a world, which was completely new to me, but during that internship I thought there were many ways of improving the business, of doing it better, and I ended up actually copying that exact same business (they went bankrupt, in my defense). I implemented the whole thing together with two friends, and it was a long journey, over eight years, but it was super successful and, primarily, a lot of fun. During my earn-out I already had other ideas, and the ambition, the desire, to do something right after that, and that I did together with Duncan. Duncan and I had a company afterwards in e-commerce AI: we started in 2018 to develop search and recommendation solutions for online shops, and a lot of those services were based on multimodal vector embeddings, which was pretty early, starting in 2018. But we ended up giving that IP and company away, and the reason is that we had developed our own internal tooling, which SuperDuperDB now derives from. The name was pretty much a working title in the very early days. Originally the concept comes from Duncan, but as we were thinking about what to do next, the concept got more and more concise, and the idea behind it is to make your existing database "super duper": to transform it into an AI development and deployment environment. When we then started, it's very hard to find a name which is not fully occupied, where you can have the domain, and it's a funny name. From my experience at a previous company, it's good to have a catchy name; if there's substance behind it, serious stuff, then why not have a funny name?

Shane: That makes sense, and it falls in line with MongoDB, which is short for "humongous"; that's what our founders were trying to address at the time, the problems with super large databases. But before we go there, and before you hooked up with Teemo: Duncan, what was your path before the two of you met and started to build things together?

Duncan: Thanks, by the way, first of all, for having us on here; great to connect. My path: I studied mathematics, and at that time math was essentially all done with pen and paper, so I was sort of late to the party with development. When I finally learned to develop, in my master's studies, I found a lot of things very difficult to get started with, for instance cloud computing, getting your development environment set up, so I was always on the lookout to try and make my life easier. I got my PhD in machine learning and AI, was in postdoctoral research in that domain for a bit, in neurology at one point, and then made my way to where industrial R&D meets academic research, at a company that was at the beginning of its AI journey. It was a big e-commerce company in Germany, and despite its size it was super in its infancy in terms of how to leverage compute and data and use intellectual property developed in-house in a productive way. It was very, very difficult: to get anything done, you essentially had to jump over hurdles and try to find the simplest design pattern and the simplest setup to get what you wanted done. That was how this thinking started, in the direction of: your data store is the center of any productionized AI deployment, or many AI deployments, and ideally, if you're doing something like AI search, where you're searching for meaning, it would be nice if this was just in your database. Until recently, the smart thing to do if you wanted search was something like Elasticsearch, where it's essentially just a database query, but with some cool functionality that makes it robust to spelling mistakes and gives you some fuzzy matching. As things have transitioned to AI, what you would like to do is actually search with a human-readable query and find the things in the database which somehow correspond to it. The technology hasn't quite gone there; what it's gone to is essentially "here's a vector which describes your thing", and then some databases, or some new vector search engines, provide the functionality of comparing a vector with other vectors. But that's not really what I as a developer want. What I want is to upload the thing that's interesting in my application or program, for instance an image or a question, somehow compare that to the other things in my database, and do some kind of search, or ask a question about my data. So you really have to close the loop between the database and the models, and the models live in a completely different, new ecosystem from the databases. That's how we got to SuperDuperDB: to provide an environment which encapsulates not just databases but
also the other side, which is basically AI models developed in Python, which is the reality of AI development nowadays, and that's exactly what we have. Teemo described the journey in terms of the entities we set up to provide this functionality; we're now open source on GitHub, and we're publishing on PyPI, so you can pip install us and connect with a variety of databases, among which is MongoDB.

Shane: Excellent. We're going to get into it, and I know you've got a demo to show. I'm a bit perturbed now to hear that you've a PhD in machine learning; I'm just going to throw out the rest of my questions on this live stream, because they're not going to go anywhere near the level of knowledge you might have. I'm only going to scratch the surface. You gave us a good overview there, Duncan, of the challenges you were trying to address. In terms of the differentiation SuperDuperDB has over other solutions in the market, or other ways to do this, could you elaborate on that a little bit more? You were starting to get into it, and I'd like to dive into that a little more if I could.

Duncan: Right. Teemo, maybe you want to take that?

Teemo: Yeah, I can maybe break it down first, make it easier.

Shane: That would be superb, because we have a very broad audience, especially now, when we're throwing out vector search and AI and machine learning and embeddings. I'd love to think we've got repeat visitors coming to this show who've learned from previous AI shows, but Teemo, bring it back up a level for the moment: what do vectors mean in terms of data, and how is SuperDuperDB enabling what you're trying to build at the moment?

Teemo: I'm going to start even before that, to provide more context. The problem with building machine learning (and I would understand vector search as an AI application which is part of that) is this paradigm that you have to bring your data to the AI silo. You have, or had, two separate silos. Let's not look at vector search for a second: the whole MLOps domain, bringing machine learning to production in an environment where data is constantly changing, relies on so-called pipelines. You bring your data to the models by building complex pipelines, and so on. And then vector search came up, and dedicated vector search databases were introduced, so again you have to take the data out of the place where it originally resides, your database, MongoDB or whatever it is, and move it around into these different places. Nobody wants that, because it's very complex, and it's a good reason why so many companies have failed to adopt AI, even though there's beautiful stuff out there. Now you have MongoDB exactly supporting our thesis, that everything should happen very close to the place where the data is, because all AI is based on data. It's just a fantastic move that Atlas introduced Vector Search, and we see it now; it's the second most used, or most used, vector database by now, and we love it. By the way, we worked with it very well at our previous company.

Shane: Glad to hear, always good to hear.

Teemo: When we started SuperDuperDB, we built it on what we'd built at our previous company, and there we had developed the whole system running on MongoDB, so for a long time the initial version was designed entirely with NoSQL in mind. We have this Pythonic abstraction; of course we now support other databases too, as Duncan mentioned, but this is exactly the perfect setup: you have a native vector search engine right on your database, which is very performant. But still the question remains: where do you get the vectors? Again, with other solutions you would set up a pipeline which sends your data to an embedding service, which could be OpenAI, or you could be hosting your own Hugging Face model, to create the embeddings; then you retrieve them and store them back into MongoDB. With SuperDuperDB, you install it onto your MongoDB instance, and then you take an embedding API, or say your own Hugging Face transformer, but really work with the same simple Python command, and you just need an additional database query to define what data you want to run the model on, for what data you want to create embeddings. And then it's done: you have the vectors in your database, to index them and expose them and use them with Atlas Vector Search. That by itself is an amazing workflow, so easy and convenient for developers, but then it goes further, because we're not here just to solve vector search. We're here, first of all, to enable classical machine learning use cases, classification, segmentation, recommendation, regression and so on, where you just take a model, let it run on your data, and it's pretty much done. But when you look at more sophisticated AI applications, which involve several models working together, which is kind of the next step: you need vectorized data, then you run a query, compare the vectors, and then send the context information to a large language model. You already have two models in the process, and again, if you have constantly changing data, you have different pipelines, which you don't
need with SuperDuperDB. With SuperDuperDB it's two commands, two configurations, it's set, and you have no additional overhead to keep this deployment alive, and no boilerplate: prototype it in your notebook and then ship it, bring it into production. This whole concept of in-database AI was actually not initiated by us; it's not an invention of ours. There are other great companies worth mentioning; they started in 2017, others followed, and on Postgres you now have several solutions. But they're fundamentally different, in that they are SQL, or SQL-first; they follow the paradigm that SQL, or a SQL dialect, is the operator to use and implement. We are very Pythonic, and we have this simplified interface which allows developers to replace writing really thousands of lines of glue code connecting different components with a simple Python interface, which is very easy to adopt and use, while at the same time being able to drill down to any layer of implementation detail and bring pretty much the full power of the Python ecosystem to their workflows: enabling these single-model use cases efficiently and flexibly, but also implementing even the most complex, I wouldn't even call them AI use cases, more like AI-powered software products.

Shane: You make it sound incredibly simple and elegant, and that's obviously the end result of where you are today. What were the challenges? To me, everything that looks simple and elegant is next to magic, and you obviously had a ton of challenges along the way. Are there any of those you'd particularly want to highlight, how you went about them, how you might have solved them, to get to where you are today with the elegance of the SuperDuperDB solution?

Teemo: I think Duncan should answer that, because this goes way back; it's not just the years we've spent here at SuperDuperDB, our previous company was several years, over four in total. Actually, Duncan, you might take it, because it goes back even further.

Duncan: I guess the biggest challenge was to provide the right level of simplification without becoming too simple. The criticism we would level against our competition is that they may have tended to go too far in the direction of simplicity, at the cost of flexibility, and flexibility is something which is going to be very valuable in AI going forward, because it's moving very quickly: the models are changing form, architectures are changing, and being able to move with those changing tides is going to be important. To do that, one piece was to make sure we had Python classes, or models, as first-class citizens in our framework, and to get that working we had to look for the right design patterns that enabled it. That was one side; the other side was to make sure the thing was performant, so, to connect it with the right parallel compute technology. We're using Ray, which is a great open-source parallel compute technology in the ecosystem, focused on AI, so we're integrating with that. And when I say the right level of abstraction: at the other end of the spectrum you just have essentially arbitrary task-execution software, which we're not. You might think of Apache Airflow, or some kind of CI/CD tool, where you can stitch together any sequence of steps to do anything you like: start an instance on AWS, do anything you like on that instance, pull some credentials in. Once you get into that domain, rather than being something you can easily leverage to get your workflow working, you need to essentially start a project and take a lot of time to understand the tool, what you want to do, and fit it together. We didn't want to go there; that's the opposite end of the spectrum. We're somewhere in the middle, where you can really do a lot, under the proviso that you work with our abstractions, but you can really do a lot. I'll show you in the demo afterwards: you can actually pull videos in from the internet, turn them into images, index the images, and then search through the videos, all in this one abstraction. It's very, very powerful, but at the same time it's simple enough to restrict the code you need to write to really a very few simple commands. Does that help with your question?

Shane: It certainly does. And I suppose, you touched on it a bit too, Duncan and Teemo, given your history in this space and how swiftly everything is moving. Teemo, you talked about how troublesome the pipeline is, the ETL, the extract-transform, moving data around, putting it into embeddings, getting it back; and then you brought up one of our favorite mantras, which is that data accessed together should be stored together. That's obviously a big MongoDB thing, and it's what Vector Search allows: your vectors are in alongside the data, back in Atlas. But it's a constantly moving space. Whenever I'm joined by AI companies, I always ask them: how do you keep up, keep ahead, see around the corner as to what's next? This is a fast-moving space and you've been in it for a while. So, going back to my question about initial challenges: did you U-turn, did you reorientate at any stage during the development of SuperDuperDB, seeing what else is going on in this space?

Teemo: From my perspective, it's really this unique positioning:
because we're connecting these different silos and ecosystems, we're in a spot which isn't really changing much. There's so much happening on the API side, on the LLM side, on the framework side (you might look at LangChain or LlamaIndex, which are dominating), but we're really in that spot which is there to connect, and to allow developers to leverage all these solutions in the AI domain with their existing data infrastructure. So we probably did pivot, or change a lot, in the deep-down technical details, but the fundamental concept didn't change at all.

Duncan: I would say the concept just stayed the same, and the reason is the following: deep learning, machine learning, has actually been around for much more than a couple of years; it's been around essentially since neural networks were a thing, years and decades ago. And one thing has remained constant... I'm getting some kind of feedback from someone; maybe, Teemo, you can mute for a second. So, one thing has remained constant, which is that AI models are these mathematical functions that do many complicated things, and that has stayed the same, and we've got better at optimizing them. The thing which has led to the revolution we have now is the revolution in compute, and the right types of neural network to use that compute efficiently: that's GPUs plus the Transformer architecture. What's happening now is that different companies and research institutions are using these two things in different ways to do different things, but underneath it all, it's the same thing: you get some data, turn it somehow into numbers, transform it with these big neural networks, and then, out the other side, put it back into a form which is useful to the application. It could be text in, images out; it could be text and images in, text out. The thing that's easy to predict is that this is going to continue to happen, and transforming data from your database is going to be necessary to get it into these neural networks. That's basically how we've built the framework: flexible enough to be able to move with this constant fact.

Shane: Excellent. So the foundations were strong, the foundations were first principles, and regardless of the changing broader landscape, it's still a solution that is rooted not in new technology, but in the foundations and first principles of how things are put together. Obviously we could chat about this all day, so let's do a bit of show and tell. Duncan, let's get the demo up and show our audience exactly what you're doing, in action, and we can continue our conversation as you make your way through the demo, if that suits.

Duncan: Yeah, absolutely. Can you see my screen now?

Shane: Yep, all good.

Duncan: Right. So, as we said earlier, SuperDuperDB is on GitHub, under SuperDuperDB; we're Apache 2.0 licensed, a Python package, so you can also install us from pip, and there's lots of information here which you can use to get started. You can also give us a star if you like.

Shane: Shameless plug, go for it.

Duncan: If you navigate to the examples directory, you'll see a bunch of Jupyter notebooks.

Shane: Could you zoom in a little bit on the text size there, Duncan, please?

Duncan: There you go. In the examples directory of the main project you can see a bunch of different examples, doing different tasks, integrating with different databases, addressing different issues. I've got two of those here today; one of them I'll go through at a somewhat slower pace, and the other one I'll just use to show off what we're capable of. The first one is this "question your docs" one, and I'm just going
to use it to show you what a typical workflow might look like. In both cases we're doing search examples, because they're nice to show, and fun, but to reiterate: this is not just about search, this is about connecting AI and ML with your existing database. So, if you open the "question your docs" notebook: the aim of this notebook is to demonstrate how to take a repository of documents (they could be Markdown documents, text documents, PDFs) and index them with vector search in such a way that you can then have a conversation with the content of those documents. You might have heard of the concept of enhancing language models with documents, and OpenAI now has the GPTs service, which is related to this. The difference with what we're doing here is that you can own the whole process: you can use your database, you can use your private documents, and you don't necessarily need to send any requests to an external API provider, so we allow developers and organizations to retain complete control over all parameters. In this example, just because it's simple and fun for developers to get going, we'll use models hosted with OpenAI, but if you navigate back to the examples directory on the open source project, you'll see the same use case implemented with open source models.

Right, so let's get going. Typically, much like using a regular database, the first step in a SuperDuperDB application is to connect with your database via SuperDuperDB, and it always looks a bit like this: your database is superduper of your database. We're turning your database, which here is MongoDB, into a "super duper" database, essentially, and we're going to be using vector search; that's this configuration here. I won't get too much into the weeds of what's going on there; suffice to say that I have my Atlas account open here, you can see that, and we're going to be using Atlas Vector Search to do this. Currently there's nothing here related to this application, and you'll see that, using SuperDuperDB, I can automatically connect to Atlas and index my documents with vector search, while also leveraging AI models in Python.

Mark: I love that focus on developer experience, the fact that you're taking these common patterns of things that people need to do and making them a single line of code. It's much more declarative; I know the code that's going on under the hood there, and it's much nicer to have that declared in that sort of setup code. I like it.

Duncan: Thanks. This is, by the way, just to give you a glimpse into the future: we're really doubling down on this declarative aspect, and we're going to try and draw parallels between installing models and this kind of idempotency, an infrastructure-as-code style deployment of models onto data.

Shane: We can talk about that, I'll have questions afterwards, but I'll let you go on with the demo.

Duncan: So in this directory that I'm in, I actually have the documentation of the same project that we're using here, so it's very meta, one level up here. I'm just going to search this directory for Markdown files; this is our documentation here, and I'm actually going to be recreating this chatbot, which we created using this very notebook. This chatbot is indexing this documentation, and I'm going to show you exactly, and I do not exaggerate in the slightest, this is exactly how we built this chatbot. So here, I've just loaded the Markdown files in that directory, and I can prove that to you here: I've turned them into chunks, and here you can see one chunk. It's Markdown, which means I can show it nicely, so this is actually Markdown being displayed in Jupyter, and it's full of bits of code, explanations of what to do next, and so forth. So we want to
actually create a chatbot which, when a user asks something about our project, will tell him, or her I should say, the right answer, and maybe even give some little code snippets for what to do next. Right, so I have those chunks now, taken out of the documentation directory, and I'm going to insert the chunks into MongoDB. Here we go: a standard MongoDB-style query, executed with our SuperDuperDB connection. All right, so now it's in there, and if we go back to Atlas we can actually see the data in Atlas. It's just a standard insert, and there you go, there's the data. Exactly. So now that the data is in the database, I want to make it searchable, and the reason is that I want to create a chatbot which not only gives me reasonable answers to my questions, but informs its answers with the data which I have stored, not something it learned off the internet from 1989 or something. I want it to use my data, and my data only, in this case the documentation. To make that happen I'm going to create a vector search index, and to make this possible I'm going to use an embedding service, OpenAI's service which turns text into vectors, and I'm going to create this as a model in SuperDuperDB. This model is now in our universe of class instances, and just to check it: there you go, we have a vector. I could also print that out, just to persuade you that's what's actually going on. There you go, exactly. So those things are now going to go into the database, and we're going to leverage MongoDB vector search with those vectors. How does that work? What we're going to do is define a class which is there for the purpose of listening for incoming data and turning it into vectors with the model I just instantiated. That's called a listener; you could call it a watcher, or continuous stream capture, or something like that, but anyway, we call it a listener. I'm just going to create that object here, and a vector
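The listener idea, compute a model output for every document as it arrives, can be illustrated without any of the real machinery. This is a toy sketch, not SuperDuperDB's API: the "embedding" here is just letter frequencies, standing in for the OpenAI embedding service:

```python
from collections import Counter

def toy_embed(text):
    # Stand-in for a real embedding model: letter counts as a 26-dim vector.
    counts = Counter(text.lower())
    return [counts.get(c, 0) for c in "abcdefghijklmnopqrstuvwxyz"]

class Listener:
    """Watches inserts and attaches a model output to each document."""
    def __init__(self, model, key):
        self.model, self.key = model, key

    def on_insert(self, doc):
        doc["_vector"] = self.model(doc[self.key])
        return doc

collection = []
listener = Listener(toy_embed, key="text")

def insert(doc):
    # Every insert passes through the listener, so vectors stay up to date.
    collection.append(listener.on_insert(dict(doc)))

insert({"text": "vector search with superduperdb"})
```

The point of the pattern is exactly what's said above: vectorization is configured once, then happens automatically as data streams in, rather than being a separate ETL job.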
index in our universe wraps one of these things, and then does the necessary grunt work under the hood to connect it with, for instance, Atlas Vector Search, mapping incoming data to these vectors as it comes in. So it's more than just turning the data into vectors; it's configuring the system so that when data comes in, it gets vectorized and connected to the vector search engine in the background, which is MongoDB Atlas Vector Search. Yeah, exactly. So, as Mark correctly said (sorry, I'm on the wrong tab, there we go), this is kind of declarative: I'm just saying this thing exists, but in the moment where I say it exists, it's doing a bunch of stuff, and that speaks to the fact that we're trying to eliminate ETL and MLOps. Of course there are things happening in the background, things depend on other things, and tasks are being executed, but it's all baked into the design of the project. So in the moment in which I declared this, the text of the documents which we have in the database was converted, you see it here, into vectors, so they're now there in Atlas. Every document in this collection now has a vector, which means that for every chunk of documentation I have a vector which represents what that document means. Do you find MongoDB works particularly well for this? Because you're now essentially creating extra fields inside existing documents, sort of on the fly, and that's something that's harder elsewhere: you'd have to store that somewhere else in a relational database, or possibly modify the existing schema to be able to do that kind of thing. Was MongoDB a natural choice for building this kind of system? Absolutely. Even before Atlas Search existed, we were doing this, and the reason is that it's super flexible. You have the document schema, sorry, the document model, and you don't need to elevate, you don't need to escalate, permissions for this
to be possible, like you would if you were modifying a schema, and you don't need to do any kind of elaborate joins to actually inspect your data points. If you're a machine learner, and being one myself, a data scientist or an AI engineer tends to work very experimentally: they'll do something with the model, train it, make some improvement, store the outputs, and then maybe it won't work, or won't work very well, and then they want to do debugging. And debugging isn't like setting breakpoints; it's looking at individual data points, essentially looking for patterns and cross-correlations between the human-readable data and the numbers, that type of thing. That's so much easier if you have your outputs embedded in the documents that have the original data in them. Yeah, I'd never really thought about that, but debugging data is hard, and it makes sense to store it along the way. There's a whole world of depth in this that doesn't even get talked about, but it's coming, and that's kind of our hope: this is bigger than just "oh, now you get to search your data". No, you get to work differently with ML and data, and hopefully in a more natural way. So now, in the next cell, I'm going to test that the search has worked: "Can you explain vector search indexes with SuperDuperDB?" This is a search which I'm going to execute with our query API, and it looks like a classical MongoDB query. I could add more here, I could put filters, like a filter for a brand or some regex, but I'm going to combine this with our vector search operator, which we call `like`. Because I'm not just doing vector search; I actually want to use this engine to compare something in some way, like a document which should be like my data. So forget about vector search: I just want to be able to do the task of comparing something with something else, and under the hood I'm using vector search
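Under the hood, a `like`-style comparison boils down to nearest-neighbour search over vectors. A minimal cosine-similarity version, with made-up three-dimensional vectors standing in for real embeddings, might look like this:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def like(query_vec, docs, n=2):
    """Return the n documents whose vectors are most similar to the query.

    A real vector search engine uses an approximate index instead of
    this brute-force sort, but the contract is the same.
    """
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vector"]),
                    reverse=True)
    return ranked[:n]

docs = [
    {"text": "vector indexes", "vector": [1.0, 0.1, 0.0]},
    {"text": "model training", "vector": [0.0, 1.0, 0.2]},
    {"text": "vector search",  "vector": [0.9, 0.2, 0.1]},
]
hits = like([1.0, 0.0, 0.0], docs, n=2)
```

Atlas Vector Search replaces the brute-force sort with an indexed approximate search, which is what makes this viable at scale.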
to achieve that, but in principle it could be something else in the future. So if we now go back to the database (sorry, whoops, nope, didn't want to do that, maybe someone can tell me how to do this more easily), now I have a vector index in MongoDB which I've just automatically connected to with this Python command, and it's now indexed all my documents. And you can actually see here on the screen how this was created at the same time we executed this query. So, exactly, here are the results of the query: it's found documents which are similar to the query that I've entered, and this is just printing the text of those results. So now basically we've set the thing up so that if I input certain sentences or questions, I can find similar documents, which means we're very well set up to do the thing we came here for, which was to actually chat with the documents. That's the second model; there are two models in this, so I just need to add these two models. The second model is a chat model, an LLM, otherwise known as a large language model, and we're using OpenAI functionality, but this can be swapped for drop-in replacements from open source, where you actually host the thing on your client or on a server. Exactly. So now I've added it, and there are two models in my system: the same command as before to add this model, and now I'm able to ask the thing to do work for me. I'm going to ask it to give me a code snippet to set up a vector index. Yes: "here is a code snippet to set up a vector index, db.add...", and that's correct; it's actually what we just did. And you can verify that here: this is the content it retrieved the data from, it's correctly formulated the answer, and we could also ask other things, you know, "tell me about something more general", and it gives me more general information about the project, "open source framework"; it's basically spinning out some of the
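The retrieval-augmented step described here, feeding retrieved chunks to the LLM as context, amounts to prompt assembly before the chat call. This sketch shows the shape of it; the template wording and the example chunks are assumptions, not the notebook's exact strings:

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble an LLM prompt from retrieved document chunks.

    Instructing the model to answer only from the supplied context is
    what keeps answers grounded in your own data rather than in
    whatever the model learned during pre-training.
    """
    context = "\n---\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "How do I set up a vector index?",
    ["Use db.add(VectorIndex(...)) to register an index.",
     "A Listener computes vectors for incoming documents."],
)
```

The resulting string is what gets sent to the chat model; the vector search decides which chunks make it into the context.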
stuff which I've been imparting. What's kind of funny there is that that output actually answered one of the questions we've been having in the chat: it is not a database itself. Yeah, that's typical. It's giving you this database-like feeling, but it's not a database. That was the original intention: when you're using this, it's a natural extension of how you like to interact with your data, with your database client, but it gives you a few more things that you might wish for. Okay, there was another question there, Duncan, before we move on, which was about adding external data references. I think I know the answer to this, but you're much better placed to explain that one. So I can actually show you in this other notebook. I'm just going to skip over what's going on, but here's an example of an external data reference. In the video search example, I'm recalling a single data point from the database here, and you can see it's not actually got any data in it; it's just got a URI. What happens here is that we have a file system which is connected to the system, visible by all nodes which might be running, and we map these URIs to files on this file system and load them at select (read) time. So when you're reading data from the database, they get loaded. I can give you another example right here. When you say it's a file system, is this something you've implemented at the Python level, or is it literally kind of a smart file system under the hood? No, no, it's just a vanilla file system which you have to configure to work with your SuperDuperDB deployment. Okay. So, for instance, here I've just loaded a document from the system, and you see there's actually a native Python image in this document, so it feels like I just loaded an image out of the DB, but that's not really what happened. What really happened
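The external-reference behaviour described here, store only a URI in the document and resolve it when the document is read, can be mimicked with a tiny lazy loader. This is an illustrative sketch with made-up field names, not SuperDuperDB's actual mechanism:

```python
import os
import tempfile

def load_file(path):
    with open(path, "rb") as f:
        return f.read()

def fetch(doc, loaders):
    """Resolve a document's 'uri' field at read time.

    The database only stores the reference; the bytes are pulled from
    the shared file system (here: local disk) when the doc is read,
    so the developer experience is just 'loading data from the DB'.
    """
    if "uri" in doc and "content" not in doc:
        scheme, _, path = doc["uri"].partition("://")
        doc = {**doc, "content": loaders[scheme](path)}
    return doc

# Write a toy payload to disk and reference it by URI only.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "frame.bin")
with open(path, "wb") as f:
    f.write(b"\x89fake-image-bytes")

stored = {"_id": 1, "uri": f"file://{path}"}  # what the DB holds
loaded = fetch(stored, {"file": load_file})   # what the developer sees
```

Mapping schemes to loaders is also how support for other backends (object storage, HTTP) would slot in: one more entry in the `loaders` dict.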
is that it loaded the URI from the DB, then found the reference and loaded the data, which was automatically downloaded to this file system. But the developer experience is "I'm just loading images from my database", so you can kind of forget about that; that's the beauty of it. Nice. And if you're deploying it in the cloud, that would mean maybe having a shared file system which is mounted on all of your nodes, or, we're also working on implementing support to load directly from object storage and that kind of thing. Excellent. So I can just, if you like, show off... sorry, excuse me: live streaming full power, rule number one, turn off your speaking devices in the background; one just went off, hopefully nobody else's. I hope I read that correctly, Bernstein; I think it's an emphatic yes to this question, but I'll let the SuperDuperDB guys answer this one. No, this is exactly the process we just described, right? It's exactly for that. And again, generating vectors is just one very obvious machine learning use case, just a special type of machine learning model, embedding models, and with the same simplicity you can pretty much run any model, perform any inference job, and even train models with SuperDuperDB, as Duncan mentioned. As Duncan just showed, in this case we simply connect to an externally hosted model, hosted by OpenAI, but you can also self-host your own model with SuperDuperDB; we have several use cases using Mistral 7B models, for example, from Hugging Face, and so on. So it's exactly that: connecting models to easily generate embeddings, then enabling Atlas Vector Search, and then on top of that building a variety of applications. And this RAG, it's called retrieval-augmented generation, really a kind of context feed to the large language model, is a great application, but this is just
where it starts, in our opinion. This is text-based, and as Duncan is showing now, we already have a video search case, we have an audio search use case implemented, and we're currently working on a fully multimodal RAG application which can work with video, text, audio, and images at the same time. So it's really about the flexibility and the speed of implementing this, because as Duncan mentioned earlier, this chat-with-your-docs, I think many enterprises, or companies, or people, are building or trying to build that kind of application, and Duncan just showed how to get it done. We have live coding sessions on how you can set it up in one hour. Of course there's a lot of fine-tuning, and chunking, and deciding what model to use, but you can really get the MVP going, and then it's very easy to bring this into production and deploy with FastAPI or whatever. And of course at some point you need an interface: when you check our docs, we implemented a straightforward React interface ourselves, but you can set something up with Streamlit and really deploy it, so that people in your team, and other people and users, you know, product people, can use it. It's really that simple. Excellent. And I just left Muhammad's comment up there. I think, Timo, as you say, you're totally agnostic, you don't mind what models are behind this whatsoever, so pretty much everything works, right? Right. We have ready-made integrations with OpenAI, Cohere, and Anthropic, but it's very open: you can build your own connector, you can pretty much integrate any API. And it's an open-source project, so we encourage everybody who is active in this space to contribute and bring whatever AI framework it is. We're currently working, for example, on a LangChain integration, so that for these applications you can use
kind of this toolbox to make your RAG applications more sophisticated. For us it's really this platform approach, to empower developers to, exactly as I mentioned before, leverage the whole ecosystem. Excellent, sounds great. I'm just going to surface Thomas's question here as well: could the interface be SharePoint? My initial question would be why, as I'm not a SharePoint fan, but obviously some people are still using SharePoint. Fair enough, fair enough. What's SharePoint? Should I know what that is? Well, I would say ultimately you can build an API on top of this, so as long as SharePoint can make HTTP requests, yeah, you can build that on top of this. Okay, nice. Thanks, Duncan. We hijacked the latter part of your demo and were jumping around there; we'll let you get back in for another bit, because I think everybody is fascinated and amazed by what you've shown so far, and we've got a bunch of positive comments in there. Okay, great. So I'd be interested for whoever said that to tell me more about SharePoint; it sounds like something I should know. Someone's getting feedback, so I will mute myself. Yeah, there we go. So in this demo it's essentially a very similar sequence of steps, so you'll see it looks very similar: connect to the system, and then, further down, create a vector index. The common pattern is that you'll go through a similar sequence of commands; we're just connecting different bits of the Python ecosystem. In this case, I'll show you the data that we'll be interested in (sorry, I'm jumping around again). If I click on that, you'll see this is a video of a bunch of animals, but we've also done this for Hollywood movies and so forth, and we'll be blogging about that in the coming weeks. What I've done is to simply, in inverted commas, "insert" this video into the database, which actually means
something more sophisticated under the hood, but the developer experience is simply inserting the URI of this video, as I mentioned, into the database, and a bunch of stuff happens. You can kind of see it through the logging: it's being downloaded, it's being processed in some way. Then I'm going to set up two models, one using OpenCV, which is quite an old package in the open-source Python ecosystem for processing videos and images and so forth. I'm going to use that package to make my own inline operator, or model, to turn the video into something I can easily work with: I'm converting the video into frames, sampling them, and doing it all in the notebook, so all the code I need is actually in this notebook. Duncan, sorry for interrupting, but this is exactly what I meant before: you can pretty much bring in anything from the Python ecosystem. Yeah, so the project doesn't know anything about OpenCV (that's the `cv2` import) until this moment where I've added it. It's flexible enough for you to just mix and match and put together models from what you've got installed on your system. But as before, once I have that, I can add it to the system to ingest videos as they come in and turn them into something I can work with. There's an example of a frame that's been cut out of that video, and then I'm going to use, this time, not models hosted behind an OpenAI API, but models which are locally hosted, so I don't need an internet connection after I've downloaded the video. This is CLIP, which is for comparing text to images, telling me which text and images are similar. I create a vector index out of the result of processing this video, and then perform queries about this video. I'm on earphones, so you won't hear the ducks quacking, but, right, just to show you that wasn't a fluke... That's amazing, such a small amount of code, it's very cool. Great, and thank you so much.
Yeah, there was an elephant there, but it had already left the screen at that moment. And the cool thing about this is that if I set it up in the right way and insert another video now, it will instantly kick a process into action which will go through the steps that have already been executed once in this notebook, again, so that the system stays up to date. So I could potentially use this to index my podcast website, which has a bunch of videos on it, for instance, and I would simply need to request an index of this URI, and by setting up the system I would have this content-based search of my videos. Exactly, excellent. We've got a couple of questions with regard to what you've just shown there, Duncan, and apologies that the banner is obscuring you; I'll remove it in a second as well. Are you taking a frame per second, or all the frames, or how does it work? Yeah, so that's all open to you, how you want to do it. I can show you the logic: we have a sample frequency which we've set here. We've hardcoded it, but you could also potentially add this as a parameter to your model. Ah, very good: we're sampling every 10th frame. Okay. And an equivalent sort of question: if the video had captions, would those be picked up as well? So, if you have a video which has an audio track, you can potentially separate that, and we have other use cases in our examples directory which can then index audio files, using an AI model, which you can get on Hugging Face, that converts audio into text; downstream of that you can then index that text using a pure text vector search mechanism. There are also models out there that work directly with audio, so you can mix and match. And potentially, if you have a video with subtitles which can be easily extracted (I think you have these SRT files), you could create your own version of this which actually aligns the
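The sampling logic mentioned here, keeping every 10th frame, is easy to sketch without OpenCV. The list below stands in for the frame sequence a real `cv2.VideoCapture` loop would yield, and promoting the frequency to a parameter is exactly the tweak suggested on stream:

```python
def sample_frames(frames, every=10):
    """Keep every Nth frame of a video.

    `frames` is any sequence of frames; `every` is the hardcoded
    sample frequency from the demo, lifted into a parameter.
    """
    return [frame for i, frame in enumerate(frames) if i % every == 0]

video = [f"frame-{i}" for i in range(95)]  # toy 95-frame video
kept = sample_frames(video, every=10)
```

Only the kept frames would then be passed to CLIP for embedding, which is what keeps the index small relative to the raw video.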
images with some text. So if I just show: here we have an image which is one of the frames we've indexed. I could potentially set this up so that not only does that step spit out an image, it also spits out some text which is essentially underneath the image, and then you get a much more interesting result. Right, the main difference is that we're using CLIP here, and CLIP understands both: it can generate embeddings for text and images, and they basically both live in the same vector space, allowing you to use both as operators. But, for example, when you have a podcast, or somebody just talking, a static talking head, most likely you would just use a text embedding model, because it's more efficient, or more accurate, or whatever. But really this is where it's going: looking at large language models, they will not remain language models, but become multimodal, like general intelligence. It begs the question, too: if somebody was doing this, you've obviously identified a number of models that suit the use cases you're demoing here, but with so many models out there, how do people... is it back to experimenting, Duncan, playing around with different models to see how good and how performant they are? Yes and no. It's not just about looking for the best performance; it's also about what your operational constraints are. Are you allowed to outsource work to API providers? Are you even allowed to send your data there? What turnaround time do you need for these inserts, i.e. how quickly do you want things indexed? Modulo those requirements, you can then start experimenting, and there are also leaderboards, for instance on Hugging Face, showing which open-source embedding models achieve the best performance, and so forth. But yeah, the practice of trying things, measuring performance, and then iterating is not going to
go away; it's just taking a slightly different form. Luckily, in this new world where you have pre-trained models and systems like SuperDuperDB, the turnaround times aren't months or quarters but a few hours. And there's great work being shared, you know, on LinkedIn: there are many people whose main job is basically trying different models and talking about it, and with SuperDuperDB, as we learned, it's very easy to try out different models, whether as quick qualitative checks of whether they work, or even setting up quantitative experiments to benchmark them. Excellent. Well, we're getting a lot of "that's incredible", "it's amazing", etc. I think you've blown a lot of people's minds, and hopefully they're all heading across to the repo to start playing with this as well; that's the message I was getting earlier. Perfect. I know we've been chatting for an hour, and certainly we could talk for a lot longer. Duncan, was there anything we didn't get to touch on in the demos that you would have available for somebody going to the repo and playing with this themselves? Do you mean any other use cases, or other functionality? Either, really. One thing definitely to think about is that we've really thought about this in terms of production and large-scale use cases, so we do have the ability to set up a cluster of services to do this. It's not just that you can demo something in the notebook; it really is with a view to production. So how do you productionize this? Because obviously you're taking this Jupyter notebook, which is very much a sort of sketch pad of code; how do I then turn that into a product? So in the Jupyter notebook you're installing the model, and then, once it's in the system, we can spin up a vector search service and a change data capture service, which will listen to your
database for incoming data, and we also have workers on which we can parallelize the preparation of vectors and of model outputs. So you spin up those services, and you can either do the same thing from a notebook against those services, so you're just faster, or you can run that cluster of services to handle incoming data in a much more efficient way, essentially. Yeah, I like it, I like it. Because I think there are a lot of frameworks out there thinking very much about the developer experience, but not so much about productionizing: you almost have to lift and shift it into something else to productize it, and for me that's now the problem I'm struggling with, how I would architect something like that, so to have that out of the box is very cool. Right. Currently, this is what the open-source package is; at some point you have to deploy it, and you have to deploy it on infrastructure, on hardware, that is capable of serving your use case and serving the models, basically being able to perform the compute. But what we're working on is a fully scalable deployment, which we'd call, like, a managed service, very similar to Atlas, which makes it super easy to set up SuperDuperDB and have an environment which scales with your requirements. That means once it's deployed, there's really no additional infrastructure work for an additional use case: once you've installed and deployed SuperDuperDB, you can just implement more and more use cases and complex applications without any additional infrastructure work. Then it's really just, you know, creating these workflows and applications in Jupyter and then just shipping them. So in your view, Timo, would that be, I suppose, one of the biggest challenges most companies face, in terms of all of this infrastructure and bringing
it all together, that essentially SuperDuperDB is solving? Yes, exactly. It's two things, really. One is this single deployment of all your AI, once you get it done and deployed in a scalable fashion. I mean, you can also put it on a single A100 which is just capable of serving the large language model for your application, and deploy the next application on another machine; that's also fine. But how we intend it for bigger enterprises is really to deploy in one place, a single deployment which is scalable. So that's one, and then you have this complete end-to-end development environment which allows you to work super fast but super flexibly, so it's really an all-in-one platform. That was a great demo; that definitely came across. All right, maybe just to link it in with MongoDB: I think for MongoDB developers, the opportunity to use MongoDB with AI will make deployment so much easier, because you then have an environment in production which looks very much like your development environment, instead of, like, moving from Databricks to MongoDB as an option, or maybe you want to do both. To have as few moving bits and pieces as possible between development and production setups just allows all sorts of much more virtuous processes to exist within your organization, and even for a whole team to manage the full production cycle. It's a big topic. It is a big topic, yeah. I think you're right; having the services there to spin up on demand, especially if they scale to zero, makes these kinds of things so easy. It's certainly one of the reasons we've been seeing adoption of the new serverless instances on Atlas: if it's a development environment, you're perhaps only using it for a few hours a day, and then when you're not using it, it's not costing you anything.
Exactly, and for so many use cases. In e-commerce, you know, at night nobody's shopping, or at given times, but at some point you have these peaks, and then you need infrastructure that scales very flexibly. And in terms of the demos, obviously with the chat and also the video search etc., is there any industry or specific area, and maybe this is back to being wholly agnostic as to what gets plugged in, the language models etc., that AI is particularly suited for, or that you consider what SuperDuperDB has enabled, in terms of connecting all of the various pipework, for want of a better word, most suitable for? Are you trying to head down any particular direction, to say, look, we are the best platform, solution, infrastructure, etc. for these types of areas, or are you utterly agnostic, much like the models that you support? Exactly: the approach is to be completely horizontal. When you look at different industries, in every industry there are the kind of outward-facing applications, specific to e-commerce or healthcare or finance or whatsoever, but all of these are enterprises, and they will have loads of internal use cases, basically targeting their own business operations. So it doesn't really matter; Duncan and I have a strong background in e-commerce, and of course there are wonderful use cases where you can really take off-the-shelf models, like we saw with CLIP, to build really strong search applications, recommender systems, and so on. But it really depends, and we just want to empower people: you know your industry, and you have some understanding of AI, so you can implement these things. We don't really want to go onto the application layer, where we'd offer domain- and industry-specific applications. Of course, for common
applications, we're going to provide templates for RAG and so on, which we're already doing with these demo notebooks. Perfect, perfect. So essentially there's no niche, there isn't a vertical; you're trying to be as broad as you possibly can. And I suppose this dives back to what you said at the beginning, Duncan, that you guys have been in this space for a long time; it comes from your machine learning roots. Given that, do you see anything in the future coming down the tracks that you're not addressing just yet, anything in the AI space that you've observed recently that you'd say is going to be the significant next change? I think the public became aware of AI, and particularly generative AI, with ChatGPT, when that came on stream, and you've got grannies and aunties and everybody else using it, and kids trying to do their homework with it. And that's an entirely understandable space: type a question, get a ream of answers, a ream of text that you can cut and paste into your homework, perhaps, which my children don't do, I promise. But, long-winded question: looking forward, future-casting a little bit, what do you see coming down the tracks, given your backgrounds, and where might the space be in the not-too-distant future? Well, you know, I think reinforcement learning is now back in vogue, and that's something which all the research companies are working on, and so that will probably mean, if they work out how to do that well, we'll get more into the area of robotics, and the things that you know from sci-fi might finally become reality. I think that is actually quite exciting for what we're doing here, because then the production systems will be much more complete: it won't be one language model listening for text and spitting out more text, it will be an
array of models acting in concert, which will have some kind of memory of experiences, and that memory will be a database, and the models will be talking to the database, using the data in it to inform whichever predictions they're making in real time in order to act. So what we're hoping for is that there'll be new trends in how to build software systems where AI is a central component, AI with data, and it's not just about training a model on a big instance and then having the model ready to do whatever you want. It's going to be an integrated system of models and data and code, doing things we've wished for for generations, and now finally we're going to have people, sorry, robots, cleaning our living rooms. That's fascinating; just the architecture you described there, I don't think I've really considered that before. It's already a reality, sorry to interrupt. For instance, Teslas: I believe they have tens of models running in parallel on onboard GPUs (I don't know much about the hardware), and these models are trained separately, but then once in a while they're trained in concert. They're constantly providing feedback from the vehicle back to, like, central headquarters, or however they set it up, maybe, I don't know. And this type of system, I think, will become more ubiquitous and much easier to build, and you won't require highly secretive operations to do it. So when we built for e-commerce, we were interested in not just having semantic search and reverse image search and autocomplete and classification running separately; we wanted to train the whole system on data streaming into the website in real time, from users clicking on products and so forth, and then you have an e-commerce enterprise AI system just continuously consuming the world and getting better. That wasn't possible at the time, but I think, with a system like we have, it's
going to be possible and like looking ahead into our road map having this type of not just inference but also feedback from from the production system back into training that will be a big part of what differentiates us from competition excellent excellent so I think look we we've gone through so much I think the demo sparked so much interest in everybody um and I think a lot of people are enabled these days by what they're seeing and what they can build now to maybe think of you know solving their own problems or you know kind of setting up something on their own so to step out of the technical piece of super duper DB for the moment just to essentially setting up a company and you know going out solo as entrepreneurs yourselves and Founders into this space what advice would you have for anybody going into this area themselves about coming up and you know doing their own startup or based on your own experience of building super duper DB just before we close out either of you it's a tough question right there's so many things to say I think in the end um if you have a great idea and you think there's really a market or like a need for I wouldn't say market right that's fundamentally wrong if you think you can solve a problem and really help people then I think also for me always um it must be it must be fun so you must be somewhat fascinated with the with the solution you're trying to to build right uh yeah I mean of course many people especially if you're further down the line in your career there many things to give up but I think it's wonderful right you you kind of uh learn a lot although you think you know already everything you already know a lot uh of course can be super stressful at times but the the rewards to high right even if you fail it's a kind of a person personal growth to be excellent I I love that response Teemo Duncan anything to add to that um yeah I I would agree with all of that I I I never thought I'd become an entrepreneur as like a teenager 
I'm not like highly risk how do you say the opposite of of us like comfortable with risk but um I think as a programmer or a technical person the risk is quite low because you know you try this for some time and you'll learn a bunch of skills which you can then deploy again in in a regular job if it doesn't work out or you'll maybe get lucky and so why not give it a go I love that why not give it a go most definitely and and on that very optimistic note I think we can we can do our best to try and wrap up any Mark anything there from what what's been shared by Teemo and Duncan that you'd like to add to from your knowledge and the and particularly the python space that you occupy a lot of the time um there's nothing specific around the python stuff except for the fact that I do write python code against mongodb quite a bit and so that I can picture some of the things that are going on but under the hood to make things like index creation idempotent and it's like you know there's some really nice patterns there um that essentially you end up rewriting in application after application and so to have them them already developed for you is doing a lot of work under the hood and so yeah from my perspective I just kind of want to uh want to take it out and play with it and start start kind of feeding it data and um and Building Things so yeah it's it it would definitely be my my go-to toolkit at the moment awesome that's great to here excellent and and obviously look we we want to make sure that people go and visit super duper DB so super simple to get there pardon the pun and um once in there you'll see the straight up the top the links to the GitHub repo and everything else else where you can get started and play around with all that fabulous stuff that Duncan has been showing us as well too if you want to learn a bit more about how Vector search can help you so you know go to our link there MDB do link is our shortcut MDB Vector search will take you to the or just 
Google Vector search as well too and you'll see again like we discussed earlier how performant it is when your vectors are alongside the data that they're associated with as well too anybody who wants to play around with Atlas feel free to do so we have a really generous fre here for your proof of Concepts and anything that you want to do so go in there to that link MDB link CC cloudconnect that's the name of this show Atlas 23 it'll take you straight through to our registration page there for Atlas you can get up and run for free and Mark touched on it earlier as well too if you go beyond the capabilities of our free tier we have a serverless tier that can scale up and scale down and and back to zero practically as well too H so plenty plenty of of flexibility there um if you want to learn a little bit more about what we've been doing with super duper DV um we have a link um there and super duper and you'll go to our partners page where we show what we're doing together and I think that's a relatively new page that we put together it's nice to see uh super duper DB up there amongst others as well too um so above all I think this has been fascinating I had a boss who said the reward for good work was more work and I think the reward for doing an excellent stream is we're going to have to get you back at some stage in the future as well too maybe with some more examples or some more in-depth and maybe even Duncan to leverage a a longer stream time where we're actually going to go through step by step and nearly almost a workshop but look I I'll keep your details I'll get you back on the show most definitely because this has been superb and I think as all the comments have been as well too an an eyeopener uh for most people and the ease and also the flexibility that your super duper DB brings I think to anybody trying to build something in the AI space is ideal so um without further talk from me and I talk a lot I'm Irish it's my apologies um Teemo any last words for 
our stream before we sign off and then I'll go turn to you Duncan as well this was great hello fun we come back with pleasure yeah there you go see look you said you'd come back already so you will get an invite Duncan yeah likewise uh please try it and uh thanks thanks for having us glad we come back excellent and look thank you for everybody who's commented while we were live streaming as well too sorry we didn't get to all of the comments we will go back in through Linkedin or YouTube wherever you left the comments and try and add some information if we can and help out but above all go check out SuperDuper DB see what the team have built there and you know if you do build something certainly get in touch with Teemo and Duncan and showcase it as well too we'd love to see what happens and as Duncan Community forums it's uh we love to see what people are building yes very good Mark you're doing a good job there now I'm sure I have a short link all of that sort of stuff that needs to go up.com there we go and we do have a also which year everybody's very much invited to join right to ask questions when they run into problems and we're happy to to share wisdom there as well we'll try and stick a link in the description after after the stream you also find it on the read me and so yeah perfect perfect and for those that are watching this live and you know as I said at the beginning we do this every Thursday and you can also rewatch it if you found an interesting please use your socials to share it out further the more people we have discussing these topics coming on and joining these shows the more information that we can impart and the more value and fun that we get out of it as well too but for now from me Shane in Ireland uh mark up in Scotland Duncan where are you again did you say or you didn't say first time I'm in West Germany today West Germany today Teemo where have you gone to these days I'm in Berlin I'm we're both located in berin but you thinking you're 
in Death Mode or no what is the place it's a small Old Town West yeah excellent excellent well from Ireland Scotland and two in Germany this has been a pleasure to chat to you both Teo Dunc and mark thank you for being roped in to help out on this episode of cloud pleasure absolutely above all thanks for everybody who joined us on our stream please keep an eye out for future episodes from all of the the folks here at mongodb TV and uh we look forward to welcome you again take care take care everybody
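Mark's remark about idempotent index creation deserves a concrete sketch. The helper below is illustrative only, not SuperDuperDB's actual implementation: it emulates the "ensure the index exists" pattern he describes against any PyMongo-style collection object. The `ensure_index` name and the example database/collection names are assumptions for the sketch.

```python
def ensure_index(collection, keys, **options):
    """Idempotent index creation: build the index only if no index
    with the same key pattern exists yet.

    MongoDB's createIndex is itself a no-op for an identical spec, but
    this guard also reports the existing index's name and avoids option
    conflicts when the app's spec drifts from what is already deployed.
    `collection` is any PyMongo-style object exposing list_indexes()
    and create_index().
    """
    wanted = dict(keys)
    for idx in collection.list_indexes():
        if dict(idx["key"]) == wanted:
            return idx["name"]  # already present: nothing to do
    return collection.create_index(list(wanted.items()), **options)

# Against a real deployment this would look something like:
#   from pymongo import MongoClient, ASCENDING
#   coll = MongoClient("mongodb://localhost:27017")["shop"]["products"]
#   ensure_index(coll, [("sku", ASCENDING)], unique=True)
```

Because the check runs before `create_index`, applications can call this on every startup without worrying about the index already existing, which is the "written once, reused everywhere" pattern Mark alludes to.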
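On the point about querying vectors that sit alongside your data: with Atlas Vector Search this is a single aggregation stage. The sketch below only builds the pipeline; the index name `vector_index` and the field name `embedding` are placeholder assumptions that must match the vector search index you actually define in Atlas.

```python
def vector_search_pipeline(query_vector, index="vector_index",
                           path="embedding", limit=5, num_candidates=100):
    """Build an aggregation pipeline using Atlas's $vectorSearch stage.

    `index` and `path` are placeholders: they must match the vector
    search index configured in Atlas and the document field that holds
    the embeddings.
    """
    return [
        {"$vectorSearch": {
            "index": index,
            "path": path,
            "queryVector": list(query_vector),
            "numCandidates": num_candidates,  # ANN candidate pool to scan
            "limit": limit,                   # results to return
        }},
        # Surface the similarity score next to each matched document.
        {"$addFields": {"score": {"$meta": "vectorSearchScore"}}},
    ]

# Usage with PyMongo, assuming `query_embedding` came from your
# embedding model:
#   results = collection.aggregate(vector_search_pipeline(query_embedding))
```

Because the embeddings live in the same documents as the rest of the record, the matched results come back with all their fields, with no second lookup against a separate vector store, which is the performance point made above.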
