
Transcript of Obsidian Smart Connections UPDATE 📝 Rediscover Notes With Obsidian AI

Video Transcript:

How do we make AI more personalized? How do we stop it from making things up, from hallucinating? What if we could use our existing note system to ground the answers that the AI gives us, turning our notes from static text into dynamic filters? These questions are a large part of the reason why I started using Obsidian in the first place. I wanted to create a local repository, a private database for myself based on my own personal knowledge management system that I could then use as a filter for the answers that the AI gives me. Not just a personal knowledge management system, but an augmented personal knowledge management system. But the more notes I make, the harder it is to actually find the notes and make the connections between them. It starts to become difficult to surface those connections and find the insights that I know are sitting inside of my vault. I've been building maps of content, but they're still kind of all over the place. I need a way to enhance these maps of content, to tie them together, to surface connections. The new smart connections plugin takes this enhancement one step further, giving me custom control over how I can build out these maps of meaning and tie them all together in one place: a smart environment, a place to make smart connections and have smart chats with the AI grounded in my own knowledge and research. In today's video, I walk through not only the major upgrades to smart connections, but also how smart connections works behind the scenes in the smart environment to create a map of meaning, a vector database, so you can augment the retrieval of the information within your vault and generate better outcomes, making it much more effective and helping you find all of the value that you've already put into your Obsidian vault. Hi, my name is Callum, also known as Wanderloots, and welcome to today's video on smart connections and how you can augment your existing Obsidian vault using both smart connections and smart chat. When I say smart connections, I mean that literally. The smart connections plugin takes your entire vault and creates a map of meaning with a smart environment. You can think of this as a data layer that lives beneath your Obsidian vault. Every note gets a score on how similar it is to other notes. This similarity score makes it way easier to find related notes than a keyword search. The key here is that smart connections is not making the connections for you. Instead, it's surfacing connections based on what you've already done to help you find all of the work that you have inside of your Obsidian vault. With the new upgrade to Smart Connections, these features are more powerful than ever. Brian, the founder of Smart Connections, was so excited about all of these features and how people could use them that he offered to sponsor this video so that I could teach people how they could use all of the intricacies of Smart Connections to really augment their personal knowledge management system. I use Smart Connections all the time. I basically just leave it open in the sidebar when I'm writing and researching because it helps me surface all of these insights in my notes that I've already made. I hope that you find this tool as helpful as I do. A reminder that if you do find this video helpful, please like and subscribe. I'm working on making YouTube my full-time career, so any support you can give me is very much appreciated. Now, let's dive into the new smart connections. All right.
So, now that you have a bit of a better idea on why I'm so interested in smart connections and how much of an impact it's actually had on my understanding of how to build Obsidian in a more powerful way for myself, let's take a look at today's outline. So, today we're dealing with the smart connections upgrade, which brings it to version 3.0. So, I'm going to start by just quickly talking about the overview of what's changed since my last video. In particular, there's now a functional smart chat. That didn't work in the version 2.5 that I tried, but it works well now. There's an upgrade to smart blocks, which I'll get more into later. You have more control over your settings. There's a random connection button, which I think is a really cool way to explore and wander through your vault. And there's also this thing called the upgraded smart environment that I'll get into much more in depth later. Then in part one, I'm going to give you a brief intro to smart connections, how to get it installed and set up. Then we're going to walk through how to actually turn on your smart connections to build that smart environment. And the cool part about this is that it doesn't require any LLMs. So, this just runs 100% locally. You don't need to run any complex software. It just works almost instantly right off the bat. And it gives you a ton of power. Like on the side here, you can see that already this is giving me suggestions on what's going to be the most interesting connections to make to my existing vault here. Then in part three, I'm going to get into the smart chat, which is what I wasn't really able to show you working in the last video, but Brian Petro, the creator of Smart Connections, has been working a lot to enhance this feature, and it works now. So, I'm going to show you specifically how you can use an LLM, and a local LLM like Ollama here, to run locally on your computer with a very small size. And finally, in part four, I'm going to walk through some practical examples, some advanced features, and some cool use cases, working through a new note that I'll make in real time, showing you how I can link flow states and vibe coding together using the smart connections and smart chat. And finally, I'm going to go through the conclusion here. I'm going to walk through each of these in a little more depth so you have a better idea on how you can actually engineer serendipity using a tool like smart connections. All right, let's begin. Okay, so the first thing we need to do is install and set up smart connections. So we can go to our settings here, go to community plugins, go to browse, and search for smart connections. This is the same process as my last video, so if you haven't installed this already, this update button would just be an install button. So I'm just going to click update there. That successfully updated smart connections, and now I can go over to options. So you can see here, right up in the top right, it has already started working in the background. This has begun instantly creating what's called a vector database, which is effectively a data layer that operates in multiple dimensions and is the perfect map of meaning to give to an AI. I'll explain that more in a moment. I'm just going to hide this now. So you can see here that there's a bunch of different options, a bunch of settings here that I'll get more into in a moment, but for now I just want to scroll down a bit and take a look at the environment settings here.
So we're going to click on show environment settings. And this is where we get into more of the power of smart connections: the fact that there is this smart environment. The smart environment is effectively just a data layer that's operating behind the scenes in your Obsidian vault. It's what powers the smart connections on the side and smart chat as well. The smart environment builds a map of your vault that's perfectly understandable by AI. Again, I'll get more into this in a moment, but I'll scroll down now just so you can get started. And if we go to the smart sources section, we can see here that there's this option for the embedding model platform, transformers local built-in, and then the embedding model itself. So this is where we start to get into the power of running something locally, and the first major feature of smart connections. I'll walk through that in a second as we set up smart connections. I just want to quickly show you that we also have this new icon down in the bottom right here called smart environment. As soon as I turned on smart connections and the environment triggered, turning on this transformer and getting the embedding model to run, it created the smart environment, where I can click on it and click show stats and it pulls up an overview of all of the data that smart connections has built as a layer beneath my vault. So you can see here that I have 2,071 blocks. These are, effectively, the paragraphs inside of my vault, and 2,050 of 2,071 have been embedded into this vector embedding model. So there is now a database, effectively, called a vector database, that's been created with all of these blocks, all of the paragraphs within each of my notes. Then these smart sources are my actual notes themselves. So you can see that I have 252 of 275, and the reason is that I have some folders that are excluded. Smart actions will enable you to connect to external sources if you want to. That's a future, more advanced feature. And then we get into smart chat threads and smart completions and smart context. All of these go into the smart chat component. Smart messages and smart threads are the old version of these ones here, so they're going to be deprecated at some point. But that's it. That's really all it takes to set it up. But what did we actually set up? I'm going to give you a little bit of background knowledge right now on large language models, embedding models, vector databases, and RAG, retrieval augmented generation. This will show you how smart connections works. It also gives you, I think, a bit of a hint on why personalized knowledge management systems are so powerful. I also will explain a bit more why it matters that information is kept local and how you can use your own notes to create your custom truth to ground the answers from the large language model, so that it's not just using some generic system of the internet but it's actually using your own personal research notes. The way you can think of this is that a large language model, an LLM, effectively an artificial intelligence chatbot, has been trained on the internet as a whole. That's like giving it access to this entire global library, but it doesn't have access to your personal research notes inside of your Obsidian vault. So the goal here is that if you're asking questions to an LLM like ChatGPT, Perplexity, Google AI Studio, Gemini, whatever it is you're using, it's only using the world's library, not your own research notes.
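Before moving on, if it helps to picture what those blocks from the stats panel are at the data level, here's a rough sketch in Python. The splitting rule (blank-line paragraphs grouped under the nearest heading) is a simplification I'm assuming for illustration, not the plugin's exact chunking logic.

```python
# Rough idea of "smart blocks": each note is split into paragraph-level chunks,
# tagged with the heading they sit under, so similarity can later point at a
# specific section of a note rather than just the whole file.
# NOTE: this splitting rule is a simplified stand-in, not the plugin's own code.
def split_into_blocks(note_text: str) -> list[dict]:
    blocks, heading = [], ""
    for chunk in note_text.split("\n\n"):
        chunk = chunk.strip()
        if not chunk:
            continue
        if chunk.startswith("#"):  # a markdown heading starts a new section
            heading = chunk.lstrip("#").strip()
            continue
        blocks.append({"heading": heading, "text": chunk})
    return blocks

note = (
    "# Flow\n\nFlow states involve a deep state of immersion in an activity.\n\n"
    "# Vibe Coding\n\nBuilding software conversationally with an AI assistant."
)
for block in split_into_blocks(note):
    print(block)
```

Each of those blocks is what gets embedded and counted in that stats overview.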
So, how do we teach it to actually understand your specific notes, your research, your truth that you've been building and cultivating and curating inside of your Obsidian vault? And I just want to quickly note as I go through this, I'm going to do this at a very high level. If you're interested in me making a full video on this, where I can go much more in depth into how all of these pieces work so you have a better understanding, please let me know in the comments. If people are interested, I'm happy to make a dedicated video on this topic. But for now, let's quickly take a look at the features that power smart connections. So basically what happens here is we're going to have four different phases. We generate the embeddings, which is what just happened automatically when I turned on smart connections. We're going to introduce the ability to retrieve the information from those embeddings. And then we're going to augment, we're going to enhance, our conversation with the large language model using those retrieved documents, so that we have a smarter prompt to give to the AI, so that it gets a custom filter on the information that it is generating and then giving to us, with a much more reliable answer. This is what significantly reduces hallucinations and fake information from the AI. So this is all part of what's called retrieval augmented generation, because we retrieve the information from our own personal vault and we augment the generation that the LLM gives us. Okay. So let's take a look at each of these very quickly. Effectively what we're doing here is we're taking our information, our Obsidian vault, we're encoding it into vectors, which you can think of almost like GPS coordinates, and then storing those vectors into a database. Then when we ask a question, the LLM or embedding model will encode that question into vectors as well and then check to see how those new vectors compare to the vectors that we've already included in our database. It'll then find the most relevant information and pass that on to the prompt, which is then given to the LLM to augment the generation that it gives us, to ground the answer in the truth of our Obsidian vault. So you can kind of think of this like a map of meaning, where you might have a topic like psychology or neuroscience or computer science, and everything related to each of these topics gets vectors, gets these coordinates, that are stored close to each other if they have similar concepts. So for example, I might have a note that I'll show you later where the concept of flow states involves a deep state of immersion in an activity. That sentence gets converted into a vector embedding and then stored near psychology, because flow state is a psychological concept. Each of these topics then gets a cluster, a grouping across the map, that pins them all together in a layer that's sorted by how similar they are to one another. So you get little groups of information based on topics, based on concepts, based on shared information. Then when you go ask a question, for example, how does flow state relate to coding performance, the question gets mapped with a similarity search inside of your vector database, in that map of meaning, so that the AI knows how your question relates to what you already have in your vault. This is where you get the grounding truth.
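Here's a minimal sketch of that encode-and-compare step. It uses the sentence-transformers library and a generic small embedding model as stand-ins; Smart Connections ships its own local embedding pipeline, so the real model names and API differ.

```python
# Sketch of the embed-and-compare idea behind the vector database.
# Assumes the sentence-transformers package and a generic model; Smart
# Connections uses its own built-in local pipeline, so details differ.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any small embedding model

# Pretend these are blocks (paragraphs) pulled from notes in the vault.
blocks = [
    "Flow states involve a deep state of immersion in an activity.",
    "Vibe coding means building software conversationally with an AI.",
    "A Zettelkasten links atomic notes into a web of ideas.",
]
block_vectors = model.encode(blocks)  # one coordinate (vector) per block

# The question gets encoded the same way, then compared against every block.
question = "How does flow state relate to coding performance?"
question_vector = model.encode(question)

scores = util.cos_sim(question_vector, block_vectors)[0]  # closer to 1 = more similar
for block, score in sorted(zip(blocks, scores), key=lambda p: -float(p[1])):
    print(f"{float(score):.2f}  {block}")
```

That score, where closest to 1 wins, is essentially the number you see next to each note in the connections pane.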
Then it will retrieve the context, the retrieval part of retrieval augmented generation, to create a little custom snippet, a cheat sheet for the LLM to understand better what it is you're actually asking about and where it should be getting the information from. Not that generic library, but specifically your vault. Then it takes that information and uses it to augment the generation, so that, for example, it could say flow states, defined as optimal experience, are related to coding performance and quality because they involve deep immersion in the task. Now obviously that's a very basic answer, but you kind of get the gist: it's grounding the answer in information that was retrieved from my vault based on my specific question about vibe coding. So that's just a quick overview of RAG, and then Obsidian makes this all more powerful because it's all running on a knowledge graph, which enhances RAG into graph RAG, but I'm not going to get into that right now. So effectively you take the generic librarian that you have from the internet as a whole, from something like ChatGPT, and you turn it into a specialized expert based on your own research. So now that you understand a little bit more how it actually works and what just happened when we built this smart environment on the side here, let's take a look again at the settings. The default here has to do with the connections view, and that's what we have here on the side. So you can see this 75 here that's showing the similarity. How close is this note to the one I'm currently in? The larger the number, the closer it is to one, the more similar it is. So this is where you get an idea on where this note that I'm currently in, the smart connections upgrade v3.0, fits relative to all of the other notes inside of my vault. This is where we get that map of the vector database. And what's really cool is that this was just turned on by clicking into environment settings, selecting the transformers local built-in platform, and then turning on the nomic-embed-text model. You can also do BGE-micro-v2. That's a really good one. And I believe the Snowflake Arctic Embed extra small is also pretty solid, especially for very, very tiny models. So if you have limited space on your computer, you can use these micro or extra small models. And that's it. That's all it really takes to turn it on and to start building that vector database, that map of meaning. This is why smart connections is one of my favorite plugins, because of this feature on its own: the ability to see how related notes are. For example, if I go click into How to Take Smart Notes, this whole list of the most related notes just changed. It now just switched to show that The One Thing is actually one of the most related notes I have to How to Take Smart Notes. So, this feature on its own is one that I often just keep open as I'm working. And I'll show you a practical example in a little bit, but effectively I'll just have this open on the side so that as I'm typing, as I'm changing things in the note I'm working on, I will always see how my note is adapting and changing based on the most related notes that I have, the most similar notes. Now, you can also go through and you can exclude specific headings. So, for example, maybe you have a heading called contains AI and you don't want any notes that have a heading of contains AI included inside of your environment, inside of your smart environment. That's totally fine.
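And the augment step from a moment ago is really just prompt assembly: the retrieved blocks get stitched into a grounded prompt before anything reaches the LLM. A sketch, with made-up function names rather than anything from the plugin's internals:

```python
# Sketch of the "augment" step: retrieved blocks become a custom cheat sheet
# that is prepended to the question before it reaches the LLM.
# The function name and prompt wording are illustrative, not the plugin's own.
def build_augmented_prompt(question: str, retrieved_blocks: list[str]) -> str:
    context = "\n\n".join(f"- {block}" for block in retrieved_blocks)
    return (
        "Answer using ONLY the context from my notes below. "
        "If the context is not enough, say so instead of guessing.\n\n"
        f"Context from my vault:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_augmented_prompt(
    "How does flow state relate to coding performance?",
    [
        "Flow states involve a deep state of immersion in an activity.",
        "Vibe coding means building software conversationally with an AI.",
    ],
)
print(prompt)  # this grounded prompt is what the LLM actually sees
```

That grounding is what keeps the answer anchored to your vault instead of the generic library.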
On the exclusions side, I have my conversations that are saved specifically excluded, and you can go through here and curate what type of information you want to include inside of your vector database, inside of your smart environment map of meaning. Okay, so that completes part two. You now can see how you set up smart connections, which really is just turning it on and then selecting the model that you want to embed with. And I hope that you understand a little bit more what's actually happening there, because I think that makes you appreciate how to take notes in a way that's going to enhance the similarity when you're including tags or topics, which I talk about more in my tags and topics video on how to organize your Obsidian vault. That's all geared towards structuring your information inside of Obsidian so that this similarity gets even better based on what you're already doing. What's cool, too, is those blocks that you saw. Remember how I said you can have smart blocks? When you click on this, it pulls up specifically the blocks that made this similarity score so high. So, not only are you seeing which note is the most helpful, you're also seeing the specific parts of each note, the blocks, that are the most related to the note that you're in. So then you can just jump to the specific section and start reusing what you've already worked on. And let's say you're going through and maybe this one isn't as relevant. Maybe Read Write Own doesn't really fit in with the Obsidian AI plugin review. Well, you can always just right-click on it and click hide this. And that just hides it from this list right here. And again, in the future, the more you do this, the more the smart environment will eventually learn what you like to hide from your relationships, from these smart connections, and which ones you like to keep. So you'll actually be custom training how the smart environment shows you the most relevant information based on your own vault. And you can always unhide them as well. So the cool part, again, just to quickly finish up this part, is that all of this is happening 100% locally. You don't have to install anything else. The transformer model, the embedding model, is so small that it just fits directly into the smart connections plugin, which makes it very easy for you to run this from anywhere, with any type of computing power. And anytime you make a change, the smart environment, like you saw there, will update in the top right. So, it's always working behind the scenes to keep the connections to your note as relevant as possible as it continues to add more smart blocks into your smart environment to power the smart connections on the side. And you can also see that a few things are popping up with molecular Zettelkasten, which I have an entire video on. This is my way of structuring my information so that it's as powerful for myself as it is for AI. So I highly recommend checking out that video if you're interested. So that's all great, and this is working on your own information. It's grounding your answers in your own research. But what if you actually want to start chatting with your notes rather than just searching and writing yourself? What if you could research using your own notes, powered by this similarity score on the side? That's where we get into part three, setting up the smart chat component. A reminder to please like and subscribe if you're finding this video helpful. If you have any questions about smart connections or how this system works, please feel free to leave them in the comments.
Also, if you've been using smart connections already, please let me know in the comments, as the more we can share our workflows with each other, the more we can all learn together and then use these tools more effectively. We get synergy. Now, let's keep looking at smart connections. Okay, so now let's take a look at actually introducing a level of intelligence, the smart chat component here, which goes beyond just these smart connections with the semantic similarity, these scores on the side here, to actually having a conversation with the model itself. This is where Brian Petro, the creator of smart connections, has split this up into smart connections and smart chat. And honestly, this is really great because it lets you focus on the settings for each one independently. So, I really like that that feature has been updated. And effectively what we're going to do here is this: if you remember the embedding model creating the meaning pins for each note, for each block across your notes, we're now going to get into how we take this information and give it to the map reader, the cartographer, so that it can go and find which pieces it wants to use as context, which smart blocks to bring into the generation of answering your question. So, this is where we get into the retrieve component, where we're retrieving the information based on the similarity of what we're searching and then giving it to the AI. So, if we go over to smart chat, again, there's a bunch of settings here, so I recommend checking them out and going through; you can customize this as much as you want. You can introduce a system prompt, which will effectively create another layer of prompt that goes into every one of your prompts. If you want to have specific instructions for particular types of notes, or particular formats that you want the AI to give you the output in, you can customize all of this here. And then we get down to the model section. And this is the key part here, because you have a few different options. We can use a local model through something like LM Studio, or we can use a cloud API like ChatGPT, Gemini, Perplexity, Claude, whatever AI you want. So you can see here, if I click on models, I get a bunch of different options: Azure, custom API, Gemini, Groq, LM Studio, Ollama, OpenAI, DeepSeek. All of these options here allow you to connect the smart environment that we just built to the large language model so that it can augment the information it's using to give you an answer. So, you have two options here. You can go local, which is LM Studio or Ollama, or you can use a cloud API like the other ones. But if you're using a cloud API, this is taking your local computer, this client here, and sending your request through the API, the application programming interface, to the cloud server, which generates the response and then sends it back to you. So this would actually be sending your information to Google, to OpenAI, to Anthropic, to Groq. If you don't want that, if instead you want to keep this all local, you can choose Ollama or LM Studio. If you want to use LM Studio, it's very easy. It has a graphical user interface. So you can just download LM Studio, then go to the search/discover tab, find a model that you want, and download it. Then go over to the developer tab, select the model you want to load, load the model here, and then click on the server and start it running, making sure that CORS is on.
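For reference, once LM Studio's server is running with a model loaded, it exposes an OpenAI-compatible endpoint that any client can talk to. A rough sketch, assuming the default port 1234 and whatever model name you happened to load (both are assumptions about your setup):

```python
# Sketch of talking to LM Studio's local server directly. LM Studio exposes an
# OpenAI-compatible API, by default at http://localhost:1234/v1; the port and
# model name below are assumptions, so adjust them to your own setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

response = client.chat.completions.create(
    model="local-model",  # whichever model you loaded in LM Studio
    messages=[{"role": "user", "content": "Summarize flow states in one line."}],
)
print(response.choices[0].message.content)
```

Smart Chat does this wiring for you once you pick LM Studio as the platform; the snippet is just to show there's nothing magic behind the local option.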
And I know that probably felt like a lot, way too quickly. So I recommend checking out my other video on how to run local AI with Obsidian if you want to go much more in depth on LM Studio. But for now, let's take a look at Ollama. So, Ollama is another way to run local LLMs, to run the large language model locally on your computer so that you're not sending any information to the cloud where they're taking your info. So, what you can do here is just go to download, select your system, and download it. And once you've done that, you need to select a model. So, there's two different ways to run this. You can either run it here, where it pops up with a ChatGPT kind of interface. However, this doesn't let you connect to your Obsidian vault. This is just so you can talk with whichever model you have downloaded directly. This doesn't take your information from your Obsidian vault. So, instead, what you need to do is very easy. You go to your terminal, which I know can be scary for some people, but all you have to do is literally type in ollama run and then your model name. So if you go find a model here, for example Gemma 3, which is a very small one (see how it's 3.3 gigs), and you just copy this here and then go over to your terminal and paste it in, it will automatically download the model and start it for you. Then you can just have a conversation directly here if you want to, and it will give you answers. While this is running, so this is now just running in the background, I can go back over to Obsidian and choose the model I want. I can select Ollama, click refresh models, and then if I click here, it shows all of the different options that I have available that have been downloaded through Ollama. So, this is nice because you don't really have to run some extra software like LM Studio. You can just run this in the background through your terminal. You just have to turn it on and then click go. So, I can select this model right here. It works here. It automatically pulls in the Ollama host. And now my model is connected to the smart chat. I'll show you how that works in a second. I just want to show you quickly that there's a bunch of environment settings here. This is the same smart environment settings that you have with your smart connections. My understanding is the smart environment settings will always be synced between smart chat and smart connections, so you only have to make a change in one spot. But here is where the difference between smart chat and smart connections really comes into play, because we're selecting a large language model as the model for the chat. So we're actually beginning to get into the augmentation component rather than just the embedding component. Remember, with smart connections the model that you picked was a local embedding model, rather than a large language model like we just picked with Ollama. So what that means is, now if we go back here, the embedding model is going to create these similarity scores, the 75 and 69 here, and rank how similar all the different notes are to each other in that vector database. But the chat function now can start to use that, and we can begin adding context to the chat or having conversations with it. So you can just use an at symbol to add context here, or you can say something like, based on my notes, how does molecular Zettelkasten work? Click enter. And now the AI is going to go off and look through my notes, find the most relevant ones based on the similarity score of that question I just put in there, and answer as it's looking up context.
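Under the hood, "running in the background" means Ollama is serving a local API (port 11434 by default) that Smart Chat can point at, which is what that Ollama host field is about. A quick sketch of talking to it directly; the model tag is an assumption based on the Gemma 3 example above:

```python
# Sketch of calling the local Ollama server that `ollama run gemma3` leaves
# running in the background. Host and model tag are assumptions; adjust to
# whatever you pulled and wherever Ollama is listening.
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma3",  # whichever model tag you pulled with ollama
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "stream": False,
    },
    timeout=120,
)
print(response.json()["message"]["content"])
```

Again, Smart Chat handles this for you once Ollama is selected; this is only to demystify what the plugin is connecting to.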
So that's where we get into this section here, where it's going to go through RAG. It's actually going to find the context and retrieve it, then take the prompt that I gave it, how does molecular Zettelkasten fit in with my notes, and it's going to take that retrieved context plus my question and then generate a combined answer based on that information. You can also go in here and click and search to add context. So I can find, for example, molecular Zettelkasten, add that in here, click done, and that adds that specific note as context, and I can even click on it and see how the similarity score of that note works. So I can go through and ask a question specifically about this note now. And when I click go, it's now sending this over to Ollama, which is running in the background, to answer the question, where it's going through, pulling the context of this specific note, answering the question specifically based on the context of that note, and then updating the smart environment as you just saw there. So this is truly where RAG starts to become a lot more powerful, because I can ask questions about my specific notes, and I can customize it: not only can I ask questions about my entire vault, but I can ask questions about specific notes here. And if you ever want to change the settings on how this works, you can change the title at the top, you can start a new chat here, and we can click on the settings wheel, which pulls up the settings directly. So you can just make a change as you go. But you can see here, this did actually give a pretty good answer. It summarized that entire note, which was like 11,000 characters. And then it's cool because it gave some suggestions on how it could continue working on this in the background. And I can ask related questions. For example, if I say something like "based on my notes", it knows to go and ground its answer based specifically on my Obsidian notes. So I can go ask questions about the molecular Zettelkasten theory and then I can say, based on my notes, what else relates to this? So it just went through and scanned my entire vault based on these smart connections, based on the relationships of how these notes relate to one another, and it was able to give an answer based on the entire vault, grounding its answer in my own knowledge. So you can see here it was bringing in concepts like the second brain, PARA, spaced repetition, and specific tools like the Templater plugin. So honestly, this is working super well. I've been really impressed with this, and I love that you can do something as simple as using the at symbol to add specific notes to reference, and using particular phrasing like "based on my notes" to trigger a review of your vault to give you answers. So I could play with this a lot. I recommend giving this a title. If you're ever confused about how this works and how to get going, you can always click on the help button up here and it pulls up a slideshow that explains all of the different features and everything that you can do inside of this system. So it shows you the chat interface, how you can modify the prompts, how you can build out chat context, how you can build that chat context based on connections, how you can show the links that are there, and there's just a lot you can add to the context to improve the AI-powered retrieval that you have.
So, I recommend going through this slideshow more in depth if you want to learn specifically how you can do everything here, because there are a lot of features in smart connections, and now smart chat, that really enhance your ability to use Obsidian. And in the future, there's going to be something called inline connections, where you're able to find connections not just for the entire note, like we saw on the side there, but also for specific paragraphs. So, that's going to be a pretty cool feature, and I'm excited to get into that more once it's released to everyone. Okay, so now that you understand part three, how to set up the smart chat, we have everything working inside of smart connections. We have the smart connections up on the side here doing the semantic similarity, these scores. We have the smart chat, which can use those scores based on the smart environment to give you grounded, better answers that are based on your own notes. So now let's take a look at a couple more of the advanced features, and then I'm going to walk through an example of flow states and vibe coding. So, the first other major feature I'm actually going to show you is called the random connection button. I think this is cool. So, if we go back to our smart connection settings here, we can see that there's an open random connection button under the ribbon icon. So, if we turn that on, it just popped up on the side here. And this is cool because if I click on it, it opens a random connection based on the note I'm in right here. So, let's click on it. Cool. So, this just brought up Read Write Own, which is related to the note that I just had here. Remember, Read Write Own has got a 68 score. So, by clicking on that, it randomly opened up a specific section of what was related to the smart connections note. And you can keep clicking on this as much as you want. It will keep pulling up random notes based on the note that you're already in. So, this is kind of a fun way to go through and explore your vault. And the way the random note works is that you are more likely to get a randomized note based on a higher scored note. So, it's random in the sense that you don't know which one you're going to get, but it does also skew a little bit towards the most relevant notes, which makes sense, because if you're jumping through and you want to see how your notes might randomly connect together, you probably want to find ones that are more relevant than less relevant. So, that's a cool feature. I've already shown you the hiding connections feature, where you're able to right-click and hide a specific note. And that will, in the future, be able to train the embedding model and the smart environment generally to understand which notes are actually relevant. So that's where you can start to bring in a bit more custom style. Though my understanding is that's not a feature yet; it just will be in the future. And then finally, we get to smart blocks, which is the last major feature that I want to show you before we get into the practical example. So, for this, I'm actually going to switch over to my other vault in my practical Obsidian use series, where I've been walking through more on the research side, exploring concepts like flow and neuroscience, and generally just how we can use different tools to not only be more productive, but be more mindfully productive. So, I talk about finding flow states on demand. And in that video, I went through much more in depth on how to build out a map of content, which is what I have here. You can see the MOC.
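Going back to the random connection button for a second: the behaviour described there (any connected note can come up, but higher-scoring notes come up more often) is essentially similarity-weighted random selection. A toy sketch, with made-up titles and scores, not the plugin's actual code:

```python
# Similarity-weighted random pick: every connection is possible, but notes with
# higher similarity scores are proportionally more likely to be chosen.
# Titles and scores below are invented purely for illustration.
import random

connections = {            # note title -> similarity score to the current note
    "Read Write Own": 0.68,
    "How to Take Smart Notes": 0.75,
    "Flow": 0.81,
    "Vibe Coding": 0.62,
}

titles = list(connections)
weights = list(connections.values())  # higher score = higher chance of being picked

random_note = random.choices(titles, weights=weights, k=1)[0]
print("Opening random connection:", random_note)
```

That's the engineered serendipity idea in miniature: random, but biased toward what's already relevant.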
So, if you're interested in learning more about how this note got built, my theory behind it, and where flow states come in, I recommend watching that finding flow states on demand video. But for now, I'm going to enhance this map of content by introducing a smart block. So, you can see previously I have my atoms, molecules, and alloys, which are my molecular Zettelkasten components. This map of content is automatically pulling in those notes using Dataview, which I will upgrade to Bases at some point. But I can also open the command palette, which is command or control P, and search for smart block. And you can see here there's this option for smart connections: insert connections codeblock. So if I click on that, it drops the smart connections view from the side panel directly into this note. So you can see here it's got flow, zone, algorithm for finding flow channels, neuroscience, a video I'm working on called five ways to find flow, and a few other concepts here. So it embedded the smart connections view from the side directly in this note. What's cool about that is, as I change this note, I will have an ever-evolving block here that will show specifically which are the most related notes to this one. This is the perfect addition to a map of content, because I can always try to find more notes that are related to that map of content and then work to integrate them within my actual map here, to expand not only that map of meaning but also the topic clusters. So you can kind of think of a map of content as like a topic cluster that enhances the semantic similarity of the vector database, so that the AI better knows, oh, here's an entry point for a concept of cognitive psychology, like flow states. It just really makes the intelligence of your overarching system much more powerful to build your own clusters, because that's customized around what you understand to be the most effective for your own way of thinking. So now, within this code block, we can see here, if I click on it, this just has the three backticks, smart connections, and a specific settings section here inside of these curly brackets. If I click out of that, the block renders. What's cool is you can go to settings and control specifically how you want that specific code block to work. Maybe I only want the top five notes in there. So I can click five and then, clicking out of it, now it only has these five notes. So again, if I click on the block here, you can see there's now a new setting, results limit five. You can customize whatever you want in these specific settings for this specific block. And now, instead of having this big block on the side here in the panel for the smart connections, you have a customized block that's just sitting inside of your flow map of content. So you can do this all over your vault, introducing these custom little blocks that are specifically based on the settings that you've established for that block in that note. So, I haven't done it yet, but I could imagine doing this in the future with my digital garden, which is at wanderloots.xyz. You can see this note here is just a published version of this Obsidian note. If you're interested, I have a video that explains what a digital garden is and also specifically how I publish my Obsidian notes as a website for free as part of this digital garden. So, that's what you're seeing over here. But if we go to the actual website, we can see here that I have a glimpse of my latest growth. This is a Dataview table that's published from my Obsidian vault.
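As for that connections code block: what the command palette inserts is roughly a fenced smart-connections block with its settings in curly brackets, something like the sketch below. The key name results_limit is just my placeholder for the "results limit five" setting described above; in practice, let the plugin insert the block and edit the values it writes rather than typing this from scratch.

```smart-connections
{
  "results_limit": 5
}
```

The point is simply that each embedded block carries its own settings, independent of the panel on the side.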
So that Dataview table on the digital garden automatically pulls in the 10 most recent notes that have been added to my Obsidian vault and that also have the property dg-publish set to true. So I could imagine inserting something like this connections code block at the bottom of every single one of my pages, showing the most relevant notes connected to that note, which would make it cool for you to be able to go through, click through, and just explore the published vault based on the semantic similarity. So, that's not a feature that's available yet, but it's something that I'm assuming will happen in the future, and I'm going to see if Brian can help me implement it. So, if you're interested in that as well, please let me know in the comments. The more you can let me know what you're actually interested in having in smart connections, the more feedback we can give Brian so that he can build it. But now, let's actually go to the final step. I'm just going to do this really quickly and at a high level, because I know this could take a long time. Let's say I now go back to my main vault and I create a new note here, and I'm going to call this flow states and vibe coding. And you can see here it says check your settings; for example, the content may be less than the minimum embedding size. That's because there are no blocks inside of it yet. So let's get going on it. If I just start writing some notes here, I can go back to my flow note here and pull my flow definition. So I might say flow and link that topic. So I just started writing some notes here. You can see that as I've started, I just gave the definition of flow and vibe coding, and already it's producing a bunch of related notes. So that's pretty cool, because now I can say, "Oh, right. I have a note on how I vibe code." I could link that note here. So I'm expanding the local graph here to start building out the connections based on the most related notes. And I could take a look on the side here and be like, "Oh, right. I was listening to a podcast on Rick Rubin." So I'm going to click on that and pop it open. And maybe now I'll go through and start finding specific components that are very related to what I'm talking about. And maybe this links me to another Rick Rubin note I have or some other research. You can see these are all sources on this side, based on other notes that I've taken on different podcasts and videos that I've watched, and it includes some of my own notes, like constraining ideas into artistic expression or everyone is a creator. So I can go through now and start understanding, oh, okay, well, if I look at how I vibe code, you can see I've got these preferences here on what works for me. So I could say, oh, maybe something about flow states and vibe coding is how AI tools make me understand multiple languages, turning me into a polyglot, which is someone who understands multiple languages. So you can go through and start creating an outline based not only on the most related notes here, but also on specific components, like here's the algorithm for finding flow channels. Maybe that's something that is super helpful for me to understand how I can maintain a flow state. So I could go back to my note, and maybe I want to insert that. The more I make changes here, the more this note is going to change on the side, and the more relationships are going to appear.
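Back on that digital garden Dataview table for a moment: a query with the behaviour described (the 10 most recently created notes that are marked for publishing) would look roughly like the sketch below. The dg-publish property name and the exact fields are assumptions about the setup, not a copy of the actual query.

```dataview
TABLE file.ctime AS "Added"
WHERE dg-publish = true
SORT file.ctime DESC
LIMIT 10
```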
So the idea with this new note is that as you're writing, as you're researching, the more you begin linking and adding notes, the closer this semantic score on the side is going to get, which is going to make it more powerful for you to continue finding what you've already written inside of your vault instead of having to go search for things. So rather than using a search function, which I find is just okay, where you're just doing a keyword search, like I could search for flow and it's going to pull up specific references to flow, but it's not going to let me know, oh, this is actually the most relevant note, here's how this one ranks compared to that one. So this is where keyword search kind of breaks down a little bit, because it's only looking for the word flow. You can introduce what's called fuzzy search, but I find that it doesn't do nearly as good a job as the semantic similarity score we have on the side here based on the smart environment. And then I could, for example, go over to smart chat, create a new chat, and make sure I'm on the correct model, which I am, Gemma 3. I could say, based on my notes and this specific note that I just included here by writing the at symbol, what are some key topics I could include? Be as concise as possible. So this, I would say, just took what I've already included, but I could say be more creative and see what it does. Rather than just taking the four notes that I had already put in there, the four concepts, it's now going through and coming up with concepts like the algorithm of effortlessness, the zone, and the interpreter. So, this is taking the LLM that's currently running in Ollama over here, and it's now filtering its answers through my own notes to pull in concepts based specifically on what I've already included in here. So that just gives me some ideas, and maybe I could be like, "Oh, I do like that. Rather than just having the algorithm for finding flow channels, maybe I want to introduce something called the algorithm of effortlessness." That sounds pretty cool. So I hope that this shows you how you can begin working step by step to use the semantic similarity scores on the side to help you find what you've already written, to enhance what you're writing now, and then use the smart chat to bounce ideas off of your own notes using a local model so that you're not sending that information to the cloud. So overall, we now have a grounded artificial intelligence. The LLM is connected to our private data for factual, personalized answers, which makes it way less likely to hallucinate and also makes it far more relevant, because it's related to what you're already working on and what you've already taken notes on. So you're not only grounding the AI in private data, but you're also personalizing it based on your personal knowledge management system inside of Obsidian. This makes AI much more accessible. The fact that it's private means you can be a lot more comfortable in the type of information that you're putting in your vault and how you're connecting to that large language model to answer questions and chat with your notes. It moves far beyond keywords. You're finding meaning, not just word matching. So, this is semantic understanding based on the vector database score here on the side. And as you saw, as I'm writing, as I'm changing notes, the smart connections on the side are automatically updating as we go.
So, it's turning a note from some static system into something that could be considered almost like an intelligent thinking partner, one that I think engineers serendipity. It helps you reveal patterns and insights that you might not have discovered, or that would have taken way longer to discover, on your own. But those insights are always grounded in your own notes anyway. So, it just helps you look through your whole vault, based on this map of meaning on the side, to engineer serendipity, to make spontaneous connections between different notes. And finally, again, just to emphasize, this is all 100% local, assuming you're not using a cloud API and you're using something like Ollama or LM Studio, which keeps all of your knowledge completely safe. And now, here's a couple of quotes from my conversation with Brian that I think will help you understand not only why Smart Connections is a cool tool, but also why the person behind it has, I think, a philosophy that you might align with. And I know for myself, I like using tools where I know that the team behind it, or in this case the person, aligns with my own values, because it means that the tools that they're building are going to continue aligning with my values, hopefully, in the future, which makes me much more comfortable building this tool into my core workflow. "You know, smart connections is a piece of a larger puzzle, and that larger puzzle represents that flywheel concept where, you know, I'm trying to empower myself with tools and, by doing that, create tools that empower others." I was really impressed not only with the tools that he was building but also with the philosophy that he was putting into it, with his desire to build tools for himself that could also help other people. This is a philosophy that I find myself working in as I've been vibe coding and building out these videos, where I find that if I can teach myself something and then teach you, we can all learn together and then the world is a better place. So, it's kind of cool that Brian and I resonated so strongly on this flywheel effect of building things that can help not only ourselves, but also other people. And there you have it. Smart Connections takes your statically stored notes and turns them into a dynamic filter for your vault. That way, you can always go back and find the value that you have throughout your vault based on what you're most interested in working on right now. I personally use smart connections all the time. I pretty much just leave it up in my sidebar so that not only can it help me while I'm working on something, but it also reminds me of the notes that I've made in my vault in the past, which keeps them top of mind so I can keep making connections and surfacing insights between them. I highly recommend checking it out, and please let me know in the comments if you do, because I'm curious to learn how people use this tool more effectively. The more we can share our knowledge on how we use these tools, the more we can all learn and grow together, which is kind of the whole point of this YouTube series. A reminder to please like and subscribe if you found this video helpful. Your support really does mean a lot to me and enables me to continue making these videos, so I appreciate it very much. If you're interested in learning more about Obsidian and AI and how they integrate together, I have two playlists I recommend checking out:
Obsidian and personal knowledge management and AI learning, where I weave Obsidian and AI tools together in a way that hopefully helps you find as much value in these workflows as I do. Thanks again for watching, and I will see you in the next video.


Channel: Wanderloots
