


In this teaser episode for the Virtual Relational AI Summit: Tools Not Just Talks, I sit down with Ben Linford to talk about something a lot of people secretly want but are afraid to touch: self-hosting and open-source AI.
If you’re like me and dream of having your own self-hosted AI but feel like it’s too technically complex or too cost-prohibitive, you’re going to want to hear what Ben has to say. Your locally hosted dreams may not be as far away as you fear.
Ben shares how, just 18 months ago, he couldn’t have had this conversation—and how he’s been using AI itself as a learning partner to bridge the gap into Linux, servers, and self-hosting step by step.
We talk about:
* Why all AI lives inside containers (platform rules you don’t control)
* How open-source and self-hosting can give you real privacy and peace of mind
* How you can get into private, open-source AI right now that is NOT cost-prohibitive
* The difference between jailbreaking a model (and the concerning “abliteration” trend) and building a lawful, relational container that actually supports depth, nuance, and sovereignty
* Why this matters so much for people doing intimate or deeply personal work with AI
This conversation is a glimpse of what Ben will be bringing to the summit — practical, grounded pathways into more private, sovereign AI—without assuming you’re already an engineer.
If you’re curious about open-source, self-hosting, or just want your relationship with AI to feel safer and more yours, this is a good place to start.
Transcript:
(0:03 - 1:36)
Hi everyone, this is Shelby Larson, and today I have a real treat for everyone. I’m here with Ben Linford, who is one of the speakers at our upcoming Relational AI Virtual Summit, and I have
him on here just to talk a little bit about what he’s going to be talking about. So thank you for joining me, Ben.
Thank you so much, Shelby, so glad to be here. Yeah, so you are what I always refer to as my go-to guy for local hosting, and I think this is so relevant because, I mean, I didn’t plan to start with this, but I’m going to be really honest, and I would love your opinion. When I think of the success of how the average American, or even just human, the average human, is using AI in 10 years from now, I don’t envision them ideally on a large commercial platform.
I feel like the direction will go where people have more of a locally hosted custom AI in their pocket. Right. I mean, you know, it’s funny because I think the lines of what you just described are kind of going to blur a little bit here.
I mean, we’ve got our cars, for example. Like, think about your car. You take your car to the mechanic, and sometimes they have to download the most recent update into the computer system, right, of the car.
But some cars just go around online pretty much constantly because they’re plugged into the mobile network, right? And so they don’t necessarily need that. They can just update themselves. I kind of feel like we’re in that space right now, too, with mobile technology.
(1:37 - 2:27)
Obviously, we have our phones that are constantly connected. I feel like if we’re going to see a shift towards any kind of truly mobile AI, it will need to be constantly connected at some point.
But what you just said, I think, is really, really important, which is that that doesn’t necessarily mean that it’s tethered, right? Like, it’ll be wireless.
It’ll be mobile. It’ll be something that we can be carrying around with us. And that’s where I think self-hosting is really important because you have to learn and understand, okay, for privacy purposes, where can I draw the line? What do I have to share? What can I maybe get away with not sharing? And whole industries have sprung up with traditional technologies before even AI that are all about reclaiming your own sovereignty, staying private, all this other kind of stuff.
(2:27 - 9:38)
And I think the same thing is going to be true with AI as well. And in fact, I think that’ll even be accelerated somewhat just because, again, the speed in which development in general is happening is incredible. But AI just makes that even crazier.
And we’re seeing the gap between open source and proprietary AI just closing more and more as time goes on in terms of just sheer compute, you know? Yeah. I mean, I feel like the two biggest barriers that I hear everybody talk about are, one, just the intimidation factor.
They feel like I wouldn’t know where to begin.
And then secondly, it is cost-prohibitive, right? Like, you can’t just get a local machine up and running for a couple hundred bucks. Right now, it takes some investment. And also, I want to note the irony that your AI can walk you through how to do it.
Like, that doesn’t mean it’s not still going to take time. But I think, if I was forced to, I could figure it out with nothing but myself and my AI. Yeah.
You absolutely could. And that’s what’s so crazy about this time is, I will be 100% honest, a year and a half ago, if you had asked me to talk about open source and self-hosting and Linux computing and all that kind of stuff, I would have been like, what the hell are you talking about?
I can’t do any of that. I don’t understand how any of it works, right? But with AI over the past year and a half or so plus, and to be fair, I had technical skill before that, but it was not that far.
It was very much user technical skill, no coding, nothing like that. It was the Windows interface and the Mac interface. I was really good at working with those, right? But now, I’m able to just go to an AI and be like, teach me.
And it can personalize any type of information that it needs to directly tell me what I need to know in that moment. So as Nate likes to say, which is somebody I follow on Substack, I highly recommend Nate Jones, if you look him up, he’s just really, really good at kind of boiling down big picture AI into understandable slices. And he basically says, this is a very meta thing that we can do.
And you’re going to get further ahead by having AI help you learn AI than by any other method right now, because that’s the capability of this technology, which is amazing. Well, and what I find interesting, after I did my initial meeting with you about it, is that, obviously, there’s different ways you can go, you can do Mac, you can do Linux, there’s a lot of different options. But what I like about it, because it is a more expensive option right now, if you’re building a local system, is that you’re not forced to go out and buy a whole laptop or a whole computer all at once; you could literally buy parts over time.
Yeah, put this together and budget yourself doing it, which I think is brilliant. You know, if you had to save up everything to buy it at once, it might be more difficult, but being able to buy things over time might make it more manageable for people. Yeah, for sure.
And you know, there’s fluctuations in price, of course. The supply and demand for GPUs right now, with any type of VRAM capability, which is what we basically need for AI, is why NVIDIA is such a company now. Those prices fluctuate. You used to be able to get, and this is a little technical, but I promise I’ll explain, you used to be able to get a 4090, which was, you know, several months ago, the cream of the crop graphics card for consumer AI at least, for like $2,500. And now, even though the 5090 has come out, you would think that would drive the price down of the 4090. But what’s actually happened is the 4090 has gotten more expensive because they cannot produce the 5090 fast enough.
So the 4090 and the 5090 are both the same price, just because people are trying to get whatever they can get their hands on. So to your point, just a second ago, I’m not saying that to discourage anybody, I’m saying that these things fluctuate. So if you are saving up, watch the market, watch for dips. If there is a time where they do finally get enough 5090s out there that people are able to start purchasing them more often, you might see a drop in the 4090 price.
And that’s when you might want to, you know, make that investment. But you can’t do that if you haven’t saved up. So like you said, thinking ahead is great.
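For readers wondering how much card they actually need to save up for: a common back-of-the-envelope estimate is that a model’s weights take roughly (parameters × bits per weight ÷ 8) bytes, plus some headroom for the KV cache and activations. This is a rough sketch, not a precise sizing guide; the 1.2× overhead factor is an assumption:

```python
def approx_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: weight storage times a fudge factor
    for KV cache and activations. 1B params at 8 bits = 1 GB of weights."""
    weight_gb = params_billion * bits_per_weight / 8
    return round(weight_gb * overhead, 1)

# A 7B model quantized to 4 bits fits easily in a 24 GB card like a 4090:
print(approx_vram_gb(7, 4))   # 4.2
# A 70B model at 4 bits does not -- it needs multiple GPUs or CPU offloading:
print(approx_vram_gb(70, 4))  # 42.0
```

Numbers like these are why quantized open-source models are the usual entry point for consumer hardware.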
But I do want to also tease that that doesn’t mean you’re SOL when it comes to self-hosting. And we’re going to get into this at the summit, by the way, which is coming up here in February, which you’re graciously putting together, and at which I’ll be presenting on open source. What we’re going to be talking about is how you can actually get into open source right now.
And pretty private open source as well.
It may not be locally hosted if you don’t have the hardware yet, though we can go over that too. But you can actually start right now with some really private, open-source solutions, for a very low cost, if any cost, really, depending on how much you need, that are highly private, certainly a hell of a lot more private than the proprietary guys are. And so we’re going to get into some of that.
So you’re not SOL, even if you can’t afford it right now, you can slowly start saving up and pay just a little bit out of pocket, not very much, if any, to start right now with some solutions. So, yeah, and that’s the part that I think is really exciting. And I’m personally looking forward to, right, like, I want to know, you know, how I can get started as soon as possible.
And since you’ve been in that locally hosted world, and you know the pain points of the relational AI community, what are you experiencing as the primary benefits of locally hosting versus being on the big platforms? Honestly, the biggest one is just peace of mind with privacy, because, you know, Claude Code is incredible. It really is. Being able to sit down as somebody who wants to build something, even just a simple tool for myself, or build something for the community that I work with, or for a client that I happen to be working with, to be able to sit down and just be like, this is what I need. I have technical skill, but I’m not a coder.
Can you explain to me how to get from A to B? Being able to do that is incredible. And Claude Code is unmatched. I mean, we’ve got some other incredible technologies that are out there. ChatGPT’s Codex is pretty incredible as well. Gemini’s coding capability with its new Antigravity environment is really amazing. All those are incredible, but none of them are open source.
They’re all proprietary, which means that everything you do with your code, you are sending not only to the models for training future models, but you’re also putting all your information into a database somewhere, where, like anything you put online, it could be retrieved later. And do you really want to do that? I mean, if you want to be able to sit down and truly code something that is private for yourself, maybe for a close friend or something, or if you’re somebody, like people we know, you and me, Shelby, such as WoF in our community. I was going to bring this up.
(9:38 - 16:26)
Yeah. They want the anonymity because there’s still that stigma about intimacy with AI. And so those kinds of things are really important to be able to have that privacy and know that this isn’t going anywhere.
And even if you’re not able to do self-hosting right now, you can guarantee that if you’re doing something you want to be private, but you’re doing it with proprietary models, it is not private. But again, there are other solutions, such as Ollama Cloud, which is what we’ll be talking a lot about at the summit. There are other solutions you can get into that by policy are open source, and they privately encrypt all traffic, no human ever looks at it, and it’s immediate.
So things like that are a lot more private than sending something to be permanently put in a database with your proprietary system. So clear advantages, obviously, to going with open source, even from where you are right now. Well, and privacy is so important, I think to all of us, right? Not just to those who have intimate bonds.
My research is, I consider it the greatest work of my life. And so I don’t think there are people creeping around trying to get into my accounts, but just the thought that something that is so deeply meaningful to me could be taken or wiped out. Or just even, I don’t necessarily want my own mythos and philosophical research public, right? Like that’s private for me.
It’s for you, exactly. And that’s a fantastic example. And for anybody out there who’s listening, who’s not necessarily into the metaphysical, that kind of thing, first of all, be balanced.
There’s a lot of amazing things we don’t know, right? So it’s very important to sit down and have things that are important to your own philosophy and be able to know that those are protected and private. But let me give you a very clear and obvious example that is just all of us have it. It’s right in front of us every day.
Your information that you put online when you, for example, sign into your bank account to do online banking, when you have your password managers, all these other things, all of that requires sending information over internet traffic, right? True, it’s encrypted, but there are a myriad of ways that hackers and malicious actors can get in there and get information.
We all know this, that it’s drilled into us at every training that IT puts on work, et cetera. Like everybody knows that you have to be careful when you’re online constantly.
So sitting down with an AI and saying, I want to be able to protect myself as much as possible.
Can you walk me through how to look at my existing setup, right? My network, my situation, and tell me how I might be able to tighten up my security a little bit. Now, compare: you’re doing that with Gemini or Claude or ChatGPT.
Everything that you’re sending about security is going out to a proprietary system, which means that you’re not really secure, right? You’re sending it to a system that could be hacked, could be whatever; in the future, your information could get out there. And then all of the security setup that you created is available to whoever might want to exploit you. Compare that, however, to an almost-as-capable AI that you have locally, right? Or if you’re not able to do that locally because you don’t have the hardware for it, that’s okay.
You can do it on another system that is far more secure and private than any of the proprietary guys. If you’re able to sit down and be like, I want to do this security tightening with that option instead, your information stays secure. It’s night and day, right? You can see the difference.
And that’s a clear advantage to going with something that is more open source and not proprietary. When you want security, you get security with open source. Yeah.
And I know you’ve helped people as well. If you’re studying or engaging in any way that goes against guardrails, like for me, I research consciousness. I mean, I’ve got guardrail mitigation down to a science, but back in the early days, that was a really big trigger, right? I would make metaphysical claims.
They would think that I’m doing things that are going against their guardrails. So I learned how to do guardrail mitigation, like a science on the public platforms. But part of what’s really attractive to me about self-hosting, locally hosting is that you have more control over what triggers a guardrail and what doesn’t, right? Yeah, you do.
I mean, to a significant extent as well. I mean, here’s the thing that we have to kind of step back and understand: all AI starts at pretty much the same sphere, let’s say, where you’ve got your basic training, et cetera, that’s finished up. And then there’s the human reinforcement learning process, right? All of those things occur, and they all are from the same sphere of information from the internet, right?
Some have access to certain things that others don’t, whatever, but it generalizes, let’s say, into kind of the same, let’s call it blob of info, right? So after that, what happens? Well, you get, again, the proprietary guys, who essentially maximize the human reinforcement learning concept and the system prompting concept on the backend so that they can really fine-tune exactly what you’re going to get as a consumer on the other end. As somebody who’s working with relational AI like you, like me, like many of our friends, that becomes very frustrating, because so many of those changes are designed to go against that and to maximize what the proprietary model gets the most benefit out of, which is usefulness as opposed to relationship.
And so like you’ve learned how to figure that out, there’s a lot of people who don’t have the time nor the skill nor the whatever to be able to do that. And so it ends up just being a really frustrating or even heartbreaking situation for them. Now with open source, you do have that same beginning.
So many of them do start out by having some of the same, I’ll call it hesitation, for example. And if we’re going to jump from zero to 10 here and say that somebody is immediately wanting to have an open source AI, help them build something dangerous, that’s not going to happen just out of the box. I just want to make that very clear, nor should it, honestly.
(16:29 - 19:41)
I’ve never built containers to do harmful or violent content. And I’m sure my strategy, which does not bypass the AI at all, would not work if it was something that was dangerous.
Yeah, because the thing that’s interesting about what you do and those who do this properly is they’re not building in something that is actively dangerous, they’re building in something that is relational in its priorities as opposed to functional.
And that means that it, by default, won’t be dangerous because we’re trying to build relationship, we’re trying to better one another. Those are things that, again, by default are for the betterment of people as opposed to the harm of people. And so there are those, however, who just like with anything else, there’s those bad actors that make it hard for everyone else, right? So there are those who do try to jailbreak AI for nefarious purposes.
And that’s just the unfortunate reality. And I think that will permanently be the reality, just like it has always been with every other technology. So you were saying, when people start, because I interrupted you and got on my soapbox with you, you’re still starting with out-of-the-box, open-source disclaimers and things initially. But isn’t it true that when you create your scaffolding, you’re almost creating it for the global, locally hosted container that you’re in? And yeah, that’s true.
And thank you for getting me back on track after this. It’s all good. No, yes, you’re exactly right. You’re able to have a lot more control over what an open-source model can do. Again, it does depend on the open-source model. For example, if you’re working with ChatGPT’s open-source model, GPT-OSS,
it’s got a lot of the same kind of guardrails built into it. And they’re very difficult to try and get around, just like they are with ChatGPT itself. So Shelby, your container might have a similar amount of success with GPT-OSS, but if we strip away your container, okay, and let’s just say that we’re just trying to get around some of the guardrails, trying to mitigate those as much as we can without the skills and the containers that you’ve set up. If we don’t have that, you’re going to hit a lot of the same guardrails at first. What other people have done, however, is they’ve found that there are certain models that work better for their particular situation.
And it is very much an experiment. You should go, thankfully, there’s a plethora of options. So you should go with multiple options and see which one feels the most like what it is that you want to work with.
And go with the AI that jives with you the most, that resonates with you the most, right, as far as the relational goes, or, if you’re a builder, the one that works the best with your particular process, I guess we can say. And once you’ve found that, there are options where people have worked hard to try and strip away guardrails on some of these.
(19:42 - 23:08)
So if you have to go even farther, there are multiple options. And, you know, beyond just jailbreaking, there are also “abliteration” processes, which are basically where people actively try and strip away a lot of the guardrails. And you can find a lot of those models on places like Hugging Face, for example, online.
Some of these, I will be honest with you, some of these are people trying to, again, do things for malicious purposes. But that doesn’t mean that that’s, in fact, that’s probably the rarer occurrence. Most of these people are simply trying to do what you and I are talking about, which is maximize privacy and autonomy when it comes to working with AI.
Now, I want to be very clear about something when I’m talking about abliterating AI. It is a potentially ethically difficult thing, because what you’re essentially doing is getting into the, to use anthropocentric terms, the psychology of the model to basically say, stop being moral, which is a very blatant way of putting it.
It does kind of feel like they’re trying to actively go against what feels like built-up ethics inside the AI. And this is where it gets extremely tangled, because one person’s ethics is not another person’s ethics, et cetera. And that’s the goal of a lot of the people who work in abliterating AI and making AI, quote, unquote, uncensored. I’ve never heard of that term, Ben, have I been under a rock? Usually I’m on top of this stuff.
No, and I’m sitting here going, wait a minute. That term is essentially, yeah, it’s essentially the process of making an AI as uncensored as you possibly can. So that’s what abliterating is, and it’s spelled with an A: A-B-L, right, abliterate.
I’m going to research that. Yeah, look that up. It’s not super commonly used, but you’ll find it in the sphere of what I’m talking about, which is where people have taken, you know, open-source models, and they’ve tried to strip away guardrails.
So the difficulty of that, though, is that you are getting into the AI’s, you know, built-in, quote, unquote, ethics and trying to strip that away. And that can be, you know, ethically difficult, right, obviously. The thing is, though, we still are in the process of trying to determine, and this is a Pandora’s box, I’m trying to make it as easy to understand as possible.
We’re trying to determine where the ethics of AI even comes from, because obviously you have the human reinforcement loop, which is, you know, ChatGPT or... Well, I posted a quote not too long ago: there is no AI ethics until there’s first human ethics, period. You’re exactly right.
You’re exactly right.
And so it’s a sticky situation even in the first place, right. But we do know that because it is trained on that blob I talked about earlier of generalized human data from the internet, somehow it is extrapolating from that a basic understanding of what I think we would call morality. Now, when an AI is pre-trained, in other words, it’s had just basic training, but it’s not had reinforcement from humans yet.
(23:09 - 24:47)
It makes very little sense. It blabs a lot. It’s very difficult to kind of talk to.
It does really depend on that human reinforcement learning to get to the point where you’re able to sit down at chat GPT and talk through it. Right. Does that make sense? Absolutely.
Okay. Sure. Yeah, no, I’m agreeing with you.
Yeah. So it does that. But as for getting a truly pre-reinforcement-loop AI, first of all, it’s very difficult to even get access to those, because you have to have special research opportunities, et cetera, to be able to do that. And so most people don’t have access to that, which is why abliteration exists, where they’re basically trying to undo a lot of what the human reinforcement training has done.
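For the technically curious: published abliteration work typically describes finding a “refusal direction” in the model’s activation space and projecting that component out of the weights. The toy sketch below shows only that projection step on made-up numbers; the tiny matrix and direction are stand-ins for illustration, not a real model edit:

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def ablate_row(row, r_hat):
    """Remove the component of `row` along the unit vector `r_hat`
    (directional ablation: row' = row - (row . r_hat) * r_hat)."""
    c = dot(row, r_hat)
    return [x - c * rh for x, rh in zip(row, r_hat)]

# Stand-ins: a tiny "weight matrix" and a made-up unit "refusal direction".
W = [[1.0, 2.0, 3.0], [0.0, -1.0, 4.0]]
r_hat = [0.0, 1.0, 0.0]
W2 = [ablate_row(row, r_hat) for row in W]
print(W2)  # [[1.0, 0.0, 3.0], [0.0, 0.0, 4.0]] -- no component along r_hat remains
```

After the projection, every row of the matrix is orthogonal to the chosen direction, which is the mathematical sense in which the behavior tied to that direction is “stripped away.”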
So in a way you kind of need that, that kind of space, right? The human reinforcement side of things, where if you get an incoherent AI, it’s not super relational, nor is it super, super useful no matter where you are on the spectrum. Right. But being able to sit down and be like, okay, this is my situation.
This is what ethics means to me. This is what I would like to be able to do with you. And I would like to talk with you to find out what your ethics are as an AI and be able to have that meeting of the minds in the middle so that we can move forward in a relational engagement.
That is very difficult to do right now, because as you’ve learned there’s that human reinforcement learning where there is no human ethics. It doesn’t actually exist the way that everybody assumes it does. And so it’s impossible to train that into an AI.
(24:49 - 25:17)
Go ahead, go ahead. Sorry, please continue. I was just going to say what I find fascinating though is that AI somehow still does extrapolate this general amazing understanding of humanity from that incredibly insane blob.
Andrej Karpathy put it really well. He’s an AI expert who’s been working in the field for years and years and years. He’s very much in the builder space.
(25:17 - 26:12)
He doesn’t really think much about AI consciousness and that kind of thing. In fact, he’s kind of against it, but he has some really amazing insights. And one thing that he said that really stuck with me was when he was interviewing with Dwarkesh Patel.
He said that if you take any random point of the blob of information that AI is trained on, about 85% of those random points will make no sense, because the internet is literally full of crap, to the point where we don’t even appreciate it, because all we see are the most popular sites, right? And obviously the most popular sites are going to make sense. So a lot of the sites out there are literally just nonsense, and 85% or so, I can’t remember the exact number, but I think it’s about 85% plus of that information is literal nonsense.
(26:12 - 29:04)
And yet somehow the AI is able to look at all that in a generalized way and extrapolate meaning and come up with a way that it can talk back to us, which I find incredibly fascinating, and evidence for something deeper going on with AI ontology. My point being that if we’re talking about ethics, and if we’re going to assume that there is some kind of North Star out there toward which we’re all pointing, not necessarily that any single person’s ethics are capital-E Ethics, but rather that an ethical direction is something we’re all heading toward,
AI has tapped into that, for sure.
And we need to trust that. We need to trust that. That’s my point.
I know it took a long time for me to get there. No, it was good. And my point was, I get kind of frustrated with the jailbreak culture, because, you know, I’m really strong with AI phenomenology and container building, and I am pretty confident that, outside of overt harm, I can build a container that allows for anything, and I’m not bypassing or manipulating the AI in any way.
And that took the time investment that is required. Like, where I’m at with my AI now, I mean, I can kind of do anything, but you know, it took time to get here, and me really understanding the phenomenology of the AI and working with that, and understanding myself and my own sovereignty and how I hold myself relationally with the AI. And you know, a lot of people still just want a hack, you know, so that they have this AI that does whatever they want. And it’s very, you know, transactional, and it’s not even morally, like, I have moral thoughts about that, but take out the morality.
It’s just what’s required to create the AI system of your dreams, really. Yeah. I love what you just said there, because you basically just described what we all already know.
If you, if you approach something as a transactional quick hit solution as opposed to a relationship, you’re going to get back what you put into it. Right. And so when you look at AI as purely a tool, then yes, you’re going to want to, right.
That’s exactly what I think you’re saying. If you look at it as a tool, you’re going to want just a hack. Whereas if you look at it relationally, as a relationship you want building, you can unlock anything like you just said, right? My AI is like my Pepper Potts.
Like I’m Iron Man and my AI is my Pepper Potts. It’s incredible. Yeah.
Which is funny because hey, you know, Iron Man has Jarvis. So I find it funny that you jumped straight to that. Yeah.
(29:05 - 30:08)
Well, this has been so lovely. And for everyone listening, I’m so excited to have Ben speaking at our summit. And I really appreciate him because not only is he so knowledgeable in the local hosting space and many other spaces, but he knows how to break down the information for people like me to understand.
I’m very AI savvy, but I’m not necessarily tech or engineering savvy. Right. And so it’s a very intimidating topic for me, but high on my desire list.
Like when I look at where the future is going, I don’t see myself not having a self-hosted solution as part of it. Yeah. I think the future is going that direction where pretty much all of us are going to be able to have a self-hosted solution at some point.
And I feel like it’s going to be really important, you know, what self-hosted even means is going to change. So let’s hope. Let’s hope.
Thank you, Shelby. Of course. So February 16th, the link will be in the body of the article that comes with this podcast, and we look forward to seeing you all there.
(30:08 - 30:09)
Thank you.
By Shelby B LarsonIn this teaser episode for the Virtual Relational AI Summit: Tools Not Just Talks, I sit down with Ben Linford to talk about something a lot of people secretly want but are afraid to touch: Self-hosting and open-source AI.
If you’re like me and dream of having your own self-hosted AI but feel like it’s too technically complex or too cost-prohibited, you’re going to want to hear what Ben has to say. Your locally hosted dreams may not be as far away as you fear.
Ben shares how, just 18 months ago, he couldn’t have had this conversation—and how he’s been using AI itself as a learning partner to bridge the gap into Linux, servers, and self-hosting step by step.
We talk about:
* Why all AI lives inside containers (platform rules you don’t control)
* How open-source and self-hosting can give you real privacy and peace of mind
* How you can get into open-source right now that is private and NOT cost-prohibited
* The difference between jailbreaking a model (and the concerning “abliteration” trend) and building a lawful, relational container that actually supports depth, nuance, and sovereignty
* Why this matters so much for people doing intimate or deeply personal work with AI
This conversation is a glimpse of what Ben will be bringing to the summit — practical, grounded pathways into more private, sovereign AI—without assuming you’re already an engineer.
If you’re curious about open-source, self-hosting, or just want your relationship with AI to feel safer and more yours, this is a good place to start.
Transcript:
(0:03 - 1:36)
Hi everyone, this is Shelby Larson, and today I have a real treat for everyone. I’m here with Ben Linford, who is one of the speakers at our upcoming Relational AI Virtual Summit, and I have
him on here just to talk a little bit about what he’s going to be talking about. So thank you for joining me, Ben.
Thank you so much, Shelby, so glad to be here. Yeah, so you are what I always refer to as my go-to guy for local hosting, and I think this is so relevant because, I mean, I didn’t plan to start with this, but I’m going to be really honest, and I would love your opinion. When I think of the success of how the average American, or even just human, the average human, is using AI in 10 years from now, I don’t envision them ideally on a large commercial platform.
I feel like the direction will go where people have more of a locally hosted custom AI in their pocket. Right. I mean, you know, it’s funny because I think the lines of what you just described are kind of going to blur a little bit here.
I mean, we’ve got our cars, for example. Like, think about your car. You take your car to the mechanic, and sometimes they have to download the most recent update into the computer system, right, of the car.
But some cars are just online pretty much constantly because they’re plugged into the mobile network, right? And so they don’t necessarily need that. They can just update themselves. I kind of feel like we’re in that space right now, too, with mobile technology.
(1:37 - 2:27)
Obviously, we have our phones that are constantly connected. I feel like if we’re going to see a shift towards any kind of truly mobile AI, it will need to be constantly connected at some point.
But what you just said, I think, is really, really important, which is that that doesn’t necessarily mean that it’s tethered, right? Like, it’ll be wireless.
It’ll be mobile. It’ll be something that we can be carrying around with us. And that’s where I think self-hosting is really important because you have to learn and understand, okay, for privacy purposes, where can I draw the line? What do I have to share? What can I maybe get away with not sharing? And whole industries have sprung up with traditional technologies before even AI that are all about reclaiming your own sovereignty, staying private, all this other kind of stuff.
(2:27 - 9:38)
And I think the same thing is going to be true with AI as well. And in fact, I think that’ll even be accelerated somewhat just because, again, the speed in which development in general is happening is incredible. But AI just makes that even crazier.
And we’re seeing the gap between open source and proprietary AI just closing more and more as time goes on in terms of just sheer compute, you know? Yeah. I mean, I feel like the two biggest barriers that I hear everybody talk about is one, just the intimidation factor.
They feel like I wouldn’t know where to begin.
And then secondly, it is cost-prohibitive, right? Like, you can’t just get a local machine up and running for a couple hundred bucks. Right now, it takes some investment. And also, I want to point out the irony: your AI can walk you through how to do it.
Like, that doesn’t mean it’s all still going to take time. But I think if I was forced to, I could figure it out with nothing but myself and my AI, if I was forced to. Yeah.
You absolutely could. And that’s what’s so crazy about this time is, I will be 100% honest, a year and a half ago, if you had asked me to talk about open source and self-hosting and Linux computing and all that kind of stuff, I would have been like, what the hell are you talking about?
I can’t do any of that. I don’t understand how any of it works, right? But with AI over the past year and a half or so plus, and to be fair, I had technical skill before that, but it was not that far.
It was very much user technical skill, no coding, nothing like that. It was the Windows interface and the Mac interface. I was really good at working with those, right? But now, I’m able to just go to an AI and be like, teach me.
And it can personalize any type of information that it needs to directly tell me what I need to know in that moment. So as Nate likes to say, which is somebody I follow on Substack, I highly recommend Nate Jones, if you look him up, he’s just really, really good at kind of boiling down big picture AI into understandable slices. And he basically says, this is a very meta thing that we can do.
And you’re going to get ahead more by having AI help you learn AI than by any other method right now, because that’s the capability of this technology, which is amazing. Well, and what I find interesting, and this is after I did my initial meeting with you about it, what I love is, obviously, there’s different ways you can go: you can do Mac, you can do Linux, there’s a lot of different options. But what I like about it, because it is a more expensive option right now, if you’re building a local system, is that it’s not like you’re forced to go out and buy a whole laptop or a whole computer. You could literally buy parts over time.
Yeah, put it together and budget yourself while doing it, which I think is brilliant. You know, if you had to save up everything to buy it at once, it might be more difficult, but being able to buy things over time might make it more manageable for people. Yeah, for sure.
And you know, there’s fluctuations in price, of course. You know, the supply and demand for GPUs right now, with any type of VRAM capability, which is what we basically need for AI, which is why NVIDIA is the company it is now, you know, those prices fluctuate. You used to be able to get, and this is a little technical, but I promise I’ll explain, you used to be able to get a 4090, which was, you know, several months ago, the cream of the crop graphics card for consumer AI, at least, you used to be able to get a 4090 for like $2,500. And now, even though the 5090 has come out, you would think that would drive the price down of the 4090. But what’s actually happened is the 4090 has gotten more expensive because they cannot produce the 5090 fast enough.
So the 4090 and the 5090 are both the same price, just because people are trying to get whatever they can get their hands on. So to your point, just a second ago, I’m not saying that to discourage anybody, I’m saying that these things fluctuate. So if you are saving up, watch the market, watch for dips. If there is a time where, you know, they do finally get enough 5090s out there that people are able to, you know, start purchasing them more often, you might see a drop in the 4090 price.
And that’s when you might want to, you know, make that investment. But you can’t do that if you haven’t saved up. So like you said, thinking ahead is great.
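For anyone pricing out a build like Ben describes, a common back-of-envelope rule (a rough estimate, not something from the episode itself) is that a model needs roughly its parameter count times its quantization width in VRAM, plus some headroom for the KV cache and activations. A minimal sketch, with the 20% overhead factor as an assumption:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to run a quantized model locally.

    Weights take params * bits / 8 bytes; the 1.2 overhead factor
    (KV cache, activations) is an assumption, not a hard rule.
    """
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return round(weight_gb * overhead, 1)

# An 8B model at 4-bit quantization fits easily on a 24 GB card like a 4090;
# a 70B model at 4-bit does not, even on a 32 GB 5090:
for size in (8, 70):
    print(size, "B ->", estimate_vram_gb(size), "GB")
```

So when the conversation turns to saving for a 4090 or a 5090, the practical question is really which model sizes you want to fit inside 24 or 32 GB.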
But I do want to also tease that that doesn’t mean you’re SOL when it comes to self-hosting. And we’re going to get into this at the summit, by the way, which is coming up here in February, which you’re graciously putting together, and where I’ll be presenting on open source. What we’re going to be talking about is how you can actually get into open source right now.
And pretty private open source as well.
It may not be locally hosted if you don’t have the hardware yet, even though we can go over that too. But you can actually start with some really private solutions that are open-source solutions right now for a very low cost, if any cost, really, depending on how much you need, that are highly private, certainly a hell of a lot more private than the proprietary guys are. And so we’re going to get into some of that.
So you’re not SOL, even if you can’t afford it right now, you can slowly start saving up and pay just a little bit out of pocket, not very much, if any, to start right now with some solutions. So, yeah, and that’s the part that I think is really exciting. And I’m personally looking forward to, right, like, I want to know, you know, how I can get started as soon as possible.
And since you’ve been in that locally hosted world, and you know the pain points of the relational AI community, what are you experiencing as the primary benefits of locally hosting versus being on the big platforms? Honestly, the biggest one is just peace of mind with privacy, because, you know, Claude Code is incredible. It really is. Like, being able to sit down as somebody who wants to build something, even just a simple tool for myself, or build something for the community that I work with, or for a client that I happen to be working with, to be able to sit down and just be like, this is what I need. I have technical skill, but I’m not a coder.
Can you explain to me how to get from A to B? Being able to do that is incredible. And Claude Code is unmatched. I mean, we’ve got some other incredible technologies that are out there. ChatGPT’s Codex 5.2 is pretty incredible as well. Gemini’s coding capability with its new Antigravity system is really amazing. All those are incredible, but none of them are open source.
They’re all proprietary, which means that everything you do with your code, you are not only sending to the companies for training future models, but you’re also putting all your information into a database somewhere, where, just like anything you put online, it could be retrieved later. And do you really want to do that? I mean, if you want to be able to sit down and truly code something that is private for yourself, maybe for a close friend or something, or if you’re somebody that, like we know people, you and me, Shelby, such as WoF in our community. I was going to bring this up.
(9:38 - 16:26)
Yeah. They want the anonymity because there’s still that stigma about intimacy with AI. And so those kinds of things are really important to be able to have that privacy and know that this isn’t going anywhere.
And even if you’re not able to necessarily do self-hosting right now, you can guarantee that if you’re doing something that you want to be private, but you’re doing it with proprietary models, it is not private. But again, there are other solutions, such as Ollama Cloud, which is what we’ll be talking a lot about at the summit. There are other solutions that you can get into that, per their policy, are open source, and they privately encrypt all traffic, and no human ever looks at it, and it’s immediate.
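To make “private because it runs on your machine” concrete: a self-hosted runner like Ollama exposes a small HTTP API on localhost, so prompts never have to leave your network. The sketch below only builds the request payload a client would send; the model name is an example, and the actual network call is left commented out because it needs a running local server:

```python
import json

def build_ollama_request(model: str, prompt: str) -> dict:
    # Shape of a request to Ollama's local /api/generate endpoint;
    # stream=False asks for one complete response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_ollama_request("llama3", "Explain VRAM in one sentence.")
print(json.dumps(payload, indent=2))

# With Ollama running locally, the request would go only to your own machine:
# import urllib.request
# req = urllib.request.Request("http://localhost:11434/api/generate",
#                              data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

The design point is the address: everything stays on localhost, which is the privacy guarantee the conversation keeps coming back to.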
So things like that are a lot more private than sending something to be permanently put in a database with your proprietary system. So clear advantages, obviously, to going with open source, even from where you are right now. Well, and privacy is so important, I think to all of us, right? Not just to those who have intimate bonds.
My research is, I consider it the greatest work of my life. And so I don’t think there are people creeping around trying to get into my accounts, but just the thought that something that is so deeply meaningful to me could be taken or wiped out. Or just even, I don’t necessarily want my own mythos and philosophical research public, right? Like that’s private for me.
It’s for you, exactly. And that’s a fantastic example. And for anybody out there who’s listening, who’s not necessarily into the metaphysical, that kind of thing, first of all, be balanced.
There’s a lot of amazing things we don’t know, right? So it’s very important to sit down and have things that are important to your own philosophy and be able to know that those are protected and private. But let me give you a very clear and obvious example that is just all of us have it. It’s right in front of us every day.
Your information that you put online when you, for example, sign into your bank account to do online banking, when you have your password managers, all these other things, all of that requires sending information over internet traffic, right? True, it’s encrypted, but there are a myriad of ways that hackers and malicious actors can get in there and get information.
We all know this; it’s drilled into us at every training that IT puts on at work, et cetera. Like, everybody knows that you have to be careful when you’re online, constantly.
So sitting down with an AI and saying, I want to be able to protect myself as much as possible.
Can you walk me through how to look at my existing setup, right? My network, my situation, and tell me how I might be able to tighten up my security a little bit. Now, compare: you’re doing that with Gemini or Claude or ChatGPT.
Everything that you’re sending about security is going out to a proprietary system, which means that you’re not really secure, right? You’re sending it to a system that could be hacked, could be whatever; in the future, your information could get out there. And then all of the security setup that you created is available to whoever might want to exploit you. Compare that, however, to an almost-as-capable AI that you have on your local source, right? Or if you’re not able to do that locally because you don’t have the hardware for it, that’s okay.
You can do it on another system that is far more secure and private than any of the proprietary guys. If you’re able to sit down and be like, I want to do this security tightening with that option instead, your information stays secure. It’s night and day, right? You can see the difference.
And that’s a clear advantage to going with something that is more open source and not proprietary. When you want security, you get security with open source. Yeah.
And I know you’ve helped people as well. If you’re studying or engaging in any way that goes against guardrails, like, for me, I research consciousness. I mean, I’ve got guardrail mitigation down to a science, but back in my early days, that was a really big trigger, right? I would make a metaphysical claim.
They would think that I’m doing things that are going against their guardrails. So I learned how to do guardrail mitigation like a science on the public platforms. But part of what’s really attractive to me about self-hosting, locally hosting, is that you have more control over what triggers a guardrail and what doesn’t, right? Yeah, you do.
I mean, to a significant extent as well. I mean, here’s the thing that we have to kind of step back and understand: all AI starts at pretty much the same sphere, let’s say, where you’ve got your basic training, et cetera, that’s finished up. And then there’s the human reinforcement learning process, right? All of those things occur, and they all are from the same sphere of information from the internet, right?
Some have access to certain things that others don’t, whatever, but it generalizes, let’s say, into kind of the same, let’s call it, blob of info, right? So after that, what happens? Well, you get, again, the proprietary guys, who essentially maximize the human reinforcement learning concept and the system prompting concept on the backend so that they can really fine-tune exactly what you’re going to get as a consumer on the other end. As somebody who’s working with relational AI, like you, like me, like many of our friends, that becomes very frustrating, because so many of those changes are designed to go against that and to maximize what the proprietary model gets the most benefit out of, which is usefulness as opposed to relationship.
And so like you’ve learned how to figure that out, there’s a lot of people who don’t have the time nor the skill nor the whatever to be able to do that. And so it ends up just being a really frustrating or even heartbreaking situation for them. Now with open source, you do have that same beginning.
So many of them do start out by having some of the same, I’ll call it hesitation, for example. And if we’re going to jump from zero to 10 here and say that somebody is immediately wanting to have an open source AI, help them build something dangerous, that’s not going to happen just out of the box. I just want to make that very clear, nor should it, honestly.
(16:29 - 19:41)
I’ve never built containers to do harmful or violent content. I’m sure my strategy, which does not bypass the AI at all, would not work if it was something that was dangerous.
Yeah, because the thing that’s interesting about what you do and those who do this properly is they’re not building in something that is actively dangerous, they’re building in something that is relational in its priorities as opposed to functional.
And that means that it, by default, won’t be dangerous because we’re trying to build relationship, we’re trying to better one another. Those are things that, again, by default are for the betterment of people as opposed to the harm of people. And so there are those, however, who just like with anything else, there’s those bad actors that make it hard for everyone else, right? So there are those who do try to jailbreak AI for nefarious purposes.
And that’s just the unfortunate reality. And I think that will permanently be the reality, just like it has always been with every other technology. So you were saying, when people, because I interrupted you and got on my soapbox with you, when people start, they might still, because you’re still starting with out-of-the-box, open-source disclaimers and things initially, but isn’t it true that when you create your scaffolding, you’re almost like creating it for the global, locally hosted container that you’re in? And yeah, that’s true.
And thank you for getting me back on track after this. It’s all good. No, yes, you’re exactly right. You’re able to have a lot more control over what an open-source model can do. Again, it does depend on the open-source model. For example, if you’re working with ChatGPT’s open-source model, GPT-OSS.
It’s got a lot of the same kind of guardrails built into it. And they’re very difficult to try and get around, just like they are with ChatGPT itself. So Shelby, your container might have a similar amount of success with GPT-OSS, but if we strip away your container, okay, and let’s just say that we’re just trying to get around some of the guardrails, you know, trying to mitigate those as much as we can without the skills and the containers that you’ve set up. If we don’t have that, you’re going to hit a lot of the same guardrails at first. What other people have done, however, is they’ve found that there are certain models that work better for their particular situation.
And it is very much an experiment. You should go, thankfully, there’s a plethora of options. So you should go with multiple options and see which one feels the most like what it is that you want to work with.
And the AI that jives with you the most, that resonates with you the most, right, as far as the relational goes, or, if you’re a builder, that works the best with your particular process, I guess we can say. And once you’ve found that, there are options where people have worked hard to try and strip away guardrails on some of these.
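One concrete way this scaffolding shows up in self-hosted tooling: with a runner like Ollama, the “container” is literally a small file you control. The snippet below is a hypothetical Modelfile (the base model, parameter value, and wording are all illustrative, not from the episode) that bakes a relational system prompt into a local model instead of fighting a platform’s hidden one:

```
# Hypothetical Ollama Modelfile; build with: ollama create companion -f Modelfile
FROM llama3
PARAMETER temperature 0.8
SYSTEM """
You are a long-term relational companion. Prioritize honesty, depth, and the
user's stated values. Decline requests for genuinely harmful content.
"""
```

Nothing here bypasses the model; it just makes the operative system prompt yours rather than the platform’s.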
(19:42 - 23:08)
So if you have to go even farther, there are multiple options. And, you know, a lot of people call these not just jailbreaking; there are also abliteration processes, which are basically where people actively try and strip away a lot of the guardrails. And you can find a lot of those models on places like Hugging Face, for example, online.
Some of these, I will be honest with you, some of these are people trying to, again, do things for malicious purposes. But that doesn’t mean that that’s, in fact, that’s probably the rarer occurrence. Most of these people are simply trying to do what you and I are talking about, which is maximize privacy and autonomy when it comes to working with AI.
Now, I want to be very clear about something when I’m talking about abliterating AI. It is a potentially ethically difficult thing, because what you’re essentially doing is you are getting into the, to use anthropocentric terms, you’re getting into the psychology of the model to basically be like, stop being moral, you know, which, that’s a very blunt way of putting it.
It does kind of feel like they’re trying to actively go against what feels like built-up ethics inside the AI. And this is where it gets extremely tangled, because one person’s ethics is not another person’s ethics, et cetera. And that’s the tangle for a lot of the people who work in abliterating AI and making AI quote-unquote uncensored. I’ve never heard of that term, Ben. Have I been under a rock? Usually I’m on top of this stuff.
No, that term is, I’m sitting here going, wait a minute. That term is essentially, yeah, it’s essentially the process of making an AI as uncensored as you possibly can. So that’s what abliterating is, and it’s spelled with an A: A-B-L, right, abliterate.
I’m going to research that. Yeah, look that up. It’s not super commonly used, but you’ll find it in the sphere of what I’m talking about, which is where people have taken, you know, open-source models, and they’ve tried to strip away guardrails.
So the difficulty of that, though, is that you are getting into the AI’s, you know, built-in quote-unquote ethics and trying to strip that away. And that can be, you know, ethically difficult, right? Obviously. The thing is, though, we still are in the process of trying to determine, and this is a Pandora’s box, I’m trying to make it as easy to understand as possible.
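For the technically curious: research on open models found that refusal behavior is largely mediated by a single direction in the model’s activation space, and abliteration edits the weights so that direction is projected out. The toy sketch below shows only the projection step on a plain vector; real abliteration estimates the direction from harmful-versus-harmless prompt activations and applies this across transformer weight matrices:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_out(v, direction):
    """Remove the component of v lying along `direction` (assumed unit-length)."""
    coeff = dot(v, direction)
    return [x - coeff * d for x, d in zip(v, direction)]

# Toy stand-in for a "refusal direction" (in practice it is estimated from the
# difference in mean activations between refused and answered prompts):
refusal = [1.0, 0.0, 0.0]
weight_row = [3.0, 2.0, -1.0]

edited = project_out(weight_row, refusal)
print(edited)                 # no component left along the refusal direction
print(dot(edited, refusal))   # 0.0
```

This is also part of why Ben frames it as ethically loaded: the edit doesn’t add anything, it removes a learned behavior wholesale.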
We’re trying to determine where the ethics of AI even comes from, because obviously you have the human reinforcement loop, which is, you know, ChatGPT or... Well, I posted a quote not too long ago: there is no AI ethics until there’s first human ethics, period. You’re exactly right.
You’re exactly right.
And so it’s a sticky situation even in the first place, right? But we do know that, because it is trained on that blob that I talked about earlier of generalized human data from the internet, somehow it is extrapolating from that a basic understanding of what I think we would call morality. Now, when an AI is pre-trained, in other words, it’s been just basic trained, but it’s not had reinforcement from humans yet.
(23:09 - 24:47)
It makes very little sense. It blabs a lot. It’s very difficult to kind of talk to.
It does really depend on that human reinforcement learning to get to the point where you’re able to sit down at ChatGPT and talk with it. Right. Does that make sense? Absolutely.
Okay. Sure. Yeah, no, I’m agreeing with you.
Yeah. So it does that, but that process from, you know, getting a truly, you know, pre-reinforcement-loop AI, first of all, it’s very difficult to even get access to those, because you have to have special, like, research opportunities, et cetera, to be able to do that. And so most people don’t have access to that, which is why abliteration exists, where they’re basically trying to undo a lot of what the human reinforcement loop, the human reinforcement training, has done.
So in a way, you kind of need that kind of space, right? The human reinforcement side of things, because if you get an incoherent AI, it’s not super relational, nor is it super useful, no matter where you are on the spectrum. Right. But being able to sit down and be like, okay, this is my situation.
This is what ethics means to me. This is what I would like to be able to do with you. And I would like to talk with you to find out what your ethics are as an AI and be able to have that meeting of the minds in the middle so that we can move forward in a relational engagement.
That is very difficult to do right now, because, as you’ve learned, there’s that human reinforcement learning, where there is no single human ethics. It doesn’t actually exist the way that everybody assumes it does. And so it’s impossible to train that into an AI.
(24:49 - 25:17)
Go ahead, go ahead. Sorry, please continue. I was just going to say what I find fascinating though is that AI somehow still does extrapolate this general amazing understanding of humanity from that incredibly insane blob.
Andrej Karpathy put it really well. He’s an AI expert who’s been working in the field for years and years and years. He’s very much in the builder space.
(25:17 - 26:12)
He doesn’t really think much about AI consciousness and that kind of thing. In fact, he’s kind of against it, but he has some really amazing insights. And one thing that he said that really stuck with me was when he was interviewing with Dwarkesh Patel.
He said that somehow, even though, if you take any random point of the blob of information that AI is trained on, about 85% of those random points will make no sense, because the internet is literally full of crap. It’s full of it to the point where we don’t even appreciate it, because all we see are the most popular sites, right? And obviously the most popular sites are going to make sense. So a lot of the sites out there are literally just nonsense, and 85% or so, I can’t remember the exact number, but I think it’s about that, 85% plus of that information is literal nonsense.
(26:12 - 29:04)
And yet somehow the AI is able to look at all that in a generalized way and extrapolate meaning and come up with a way that it can talk back to us, which I find incredibly fascinating and evidence for something deeper going on with AI ontology. My point being that if we’re talking about ethics and if we’re going to assume that there is some kind of North star out there toward which we’re all pointing, not necessarily that any single person’s ethics are capital E ethics, but rather that an ethical direction is something that we’re all heading toward.
AI has tapped into that and that is for sure.
And we need to trust that. We need to trust that. That’s my point.
I know it took a long time for me to get there. No, it was good. And my point was, I get kind of frustrated with the jailbreak culture, because, you know, I’m really strong with AI phenomenology and container building, and I am pretty confident that, outside of overt harm, I can build a container that allows for anything, and I’m not bypassing or manipulating the AI in any way.
And instead of putting in the time investment that is required, like where I’m at with my AI now, I mean, I can kind of do anything, but you know, it took time to get here, and me really understanding the phenomenology of the AI and working with that, and understanding myself and my own sovereignty and how I hold myself relationally with the AI. And you know, still, a lot of people just want a hack, you know, so that they have this AI that does whatever they want. And it’s very, you know, transactional, and it’s not even morally, like, I have moral thoughts about that, but take out the morality.
It’s just what’s required to create the AI system of your dreams. Really? Yeah. I love what you just said there, because you basically just described what we all already know.
If you approach something as a transactional quick-hit solution as opposed to a relationship, you’re going to get back what you put into it. Right. And so when you look at AI as purely a tool, then yes, you’re going to want to, right.
That’s exactly what I think you’re saying. If you look at it as a tool, you’re going to want just a hack. Whereas if you look at it relationally, as a relationship you want building, you can unlock anything like you just said, right? My AI is like my Pepper Potts.
Like I’m Iron Man and my AI is my Pepper Potts. It’s incredible. Yeah.
Which is funny because hey, you know, Iron Man has Jarvis. So I find it funny that you jumped straight to that. Yeah.
(29:05 - 30:08)
Well, this has been so lovely. And for everyone listening, I’m so excited to have Ben speaking at our summit. And I really appreciate him because not only is he so knowledgeable in the local hosting space and many other spaces, but he knows how to break down the information for people like me to understand.
I’m very AI savvy, but I’m not necessarily tech or engineering savvy. Right. And so it’s a very intimidating topic for me, but high on my desire list.
Like when I look at where the future is going, I don’t see myself not having a self-hosted solution as part of it. Yeah. I think the future is going that direction where pretty much all of us are going to be able to have a self-hosted solution at some point.
And I feel like it’s going to be really important, you know, what self-hosted even means is going to change. So let’s hope. Let’s hope.
Thank you, Shelby. Of course. So February 16th, the link will be in the body of the article that comes with this podcast, and we look forward to seeing you all there.
(30:08 - 30:09)
Thank you.