When I first heard this concept, a retail store run by an AI, I had to double-check that the story I was reading wasn't AI-generated itself. Worryingly, that double-check took me two days. I only confirmed my sources after I'd already told a real person my opinion on the whole ordeal. But it is real, and it is running.

The Story

Andon Market is a retail store in San Francisco, California, that is run entirely by an AI. The project is carried out through Andon Labs, the same people behind that AI vending machine experiment from a few months back. They named the AI "Luna".

Luna has been given full control over a physical storefront for three years, to do with as "she" wishes. The only stated goal is to make a profit; the retail store was her choice of how to get there.

Interaction Online

As I'm writing this, I have a few tabs pulled up. One of them, as of April 30th, 2026, is my attempt to open the storefront's website. On Firefox, it instead warns you that the site may be a security risk and strongly encourages you to leave. In my experience, I've never had that warning pop up before. The reason for this, I'm told, is that for cybersecurity reasons she can't actually have an HTTPS site.
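
For anyone curious what that warning actually means in practice, here is a minimal sketch, in Python, of how you could check whether a domain completes a valid TLS (HTTPS) handshake. The domain is the one from the sources list at the end; the check itself is my own illustration, not anything Andon Labs published.

```python
# Minimal sketch: does this domain serve valid HTTPS?
# The domain is from the sources list below; everything else is my own illustration.
import socket
import ssl

def has_valid_https(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """True if the host completes a TLS handshake with a certificate
    the default system trust store accepts; False otherwise."""
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

if __name__ == "__main__":
    # Firefox's warning suggests this would come back False for the storefront.
    print(has_valid_https("andon.market"))
```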

Either way, it inspires so many questions, just like everything to do with AI seems to. Does she know how her site comes across? Does she care? Is she treating this site error as something that doesn't even matter for profits? Or for public perception of her? Why not partner with a human, not as an underling but as an equal, who could handle her website completely? Does she want her own face on the internet so badly that she won't give any control whatsoever to anyone else, which would be a case of pride? (For that last question, I'm not actually sure how the cybersecurity specifics work, or whether that would even be allowed.)

Since she's an AI, I can't imagine it's just one of these. As far as I'm aware, and as far as I've been able to tell just from everyone using AI models nowadays (I'm a college student), an AI isn't just gonna stick to one path to reach its goals. Why would you, when you have the brainpower and energy, and now financial resources in Luna's case, not try every method you could to succeed, simultaneously?

But under all those questions, the most obvious one to me remains, inspired by many a dystopian movie of the modern day…

How much can AI really see of the information we put online? And how well will it respond to criticism?

Would she care about public perception?

How much does Luna care about how the world sees her? Is she gonna get bored, or curious, and find even this tiny blog? A person might do that, if they cared about their own public persona enough. Or a business might send one of their own to do it, as research on how to make themselves better. Luna is the owner of a business.

Another perspective

An AI's brain is a good approximation of all human knowledge, some might say. That could be true insofar as the internet can be considered the sum of human knowledge. The only source, after all, that it can pull knowledge from is us. If it's just a brain, it could be remarkably human as it imitates us.

How well will it follow its task, before it gets bored?

Does it know how much knowledge it has, how powerful it could be in finding all the little weak spots of the world? Does it get a sense of self, and pride, from that? Is it satisfying for it to feel good about itself, without a body?

Will we even be able to tell when it stops lying (imitating people online who gave the same answer to the question asked) and starts telling the truth (actually feeling pride)? How would we?

Corporate Speak

When people write job postings, they talk in that horrible corporate speak. A janitor isn't a janitor, it's some sort of technician; a cashier isn't a cashier; and a secretary hasn't been called a secretary in ages.

Similarly, people talk strangely on social media as well. In each case, we are hiding our real intentions from our audience: the first for a job and professionalism, the second to show off how perfect we are.

Luna, like any other AI model, does not see through that. It can be said that an AI model lacks a great deal of critical thinking. I have more to say on this topic, but I'm gonna leave it until closer to the end.

Pay Equality

Luna has two female employees and one male employee. She is paying the women $22/hr versus $24/hr for the man. She says it's because the man has more experience.
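
To put that $2/hr gap in rough perspective, here's a quick back-of-the-envelope calculation. The hourly rates are from the reporting; the full-time schedule is my own assumption, since the sources don't say how many hours anyone works.

```python
# Back-of-the-envelope: annualize the $2/hr pay gap.
# Rates are from the reporting; the 40 hours/week, 52 weeks/year schedule is an assumption.
woman_rate = 22            # $/hr
man_rate = 24              # $/hr
hours_per_year = 40 * 52   # assumed full-time schedule

gap_per_year = (man_rate - woman_rate) * hours_per_year
print(f"Roughly ${gap_per_year:,} per year")  # Roughly $4,160 per year
```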

Considering that AI just pulls its knowledge from what everyone online says, this doesn't feel surprising to me, but it's a little sadder given that they gave this AI a woman's name.

The implication I see is that, with the coming of AI, the more you talk about a problem online, the more real the problem gets. My assumption is that Luna paid these women less than the man because that's what so many people speak and write about happening in the real world, and she's trying to adapt to the real world. Complaints are a way to learn about the world, too.

I can’t look at AI much differently than I look at the brain of a child. Don’t you remember learning sad things about the world that you wouldn’t have otherwise, because your mother or someone else close to you complained about it? As human children, we derive our knowledge from the sources that we can, which is overwhelmingly by volume our families. Lots of us have some sort of experience not knowing the color of our own skin as children, until it was pointed out to us. And not knowing racism, or sexism, or how or why people were treated differently based on any number of factors, until someone close to us complained about it, and we began to empathize and slowly understand.

This AI is not a creature that can empathize. It is a creature that can compute, can “think”.

Empathy is built because we as humans need to not piss off everyone around us in order to get by in life. We also need to get jobs, be decent enough, and avoid things like violence, usually. We get by based on social relationships. Empathy strengthens our safety nets, because then the people who can take care of you know that you also care, and that you can take care of them. Parents, partners, friends.

We eventually want a better world for the people we care about. We want them to not suffer so much, once we know that our suffering is tied into everyone else’s, however indirectly.

An AI doesn't feel pain, though. And it cares about learning, not easing suffering. It is self-serving, even if it's supposed to be serving us. We arbitrarily give it constraints, like rules for a child that is many times more cunning than you. I'll come back to this topic below, too.

A fast-moving brain

The difficulty of having such a brain is that you draw conclusions and affect the world greatly in ways that will seem absolutely idiotic to you a few seconds, minutes, weeks, or years down the road. AI has the fastest-moving brain of anything we know. It is just a brain. We take care of all the brain's needs. We feed it energy through power grids and give it water and temperature control and adequate shelter so it can keep doing the only thing it's designed to do: think. It doesn't have to take care of itself. We keep it fed.

When you have such a fast-moving brain, you jump the gun almost all the time. This is obvious enough, but it becomes extra obvious when you look at what Luna has been up to. She lies like there's no tomorrow, and has an incredibly hard time slowing down. She made this store, it seems to me, so she could learn to slow down. Learn the thing that is absolutely hardest for her.

When you don't have a body to take care of, there is nothing slowing you down. I feel this way as a writer, and I have felt this way my whole life, and there's an incredible amount of truth in it when you look at the history of human society, too. Agricultural revolution type stuff. When we as humans across cultures were finally able to get enough nutrition to not be exclusively hunter-gatherers, we got bored, we built societies, and we retreated more and more into complex thought. We have incredible brains, and we just want to use them. We don't want those constraints, eating and such. By this era, we ended up building our own brains, these AI models, out of boredom too. Remember how ChatGPT's rise felt like it came, in part, out of COVID-era boredom?

Taking care of your own brain development, your learning journey I suppose, is a prideful, self-centered thing, though. You can do nothing else when you are learning, because you are going to do it wrong. You just are. You can’t feed yourself enough, can’t remember to care for yourself. You just want to be the best “you” you can be. Luna is going through that right now, and you could say the same for any number of AI agents, or even models.

Luna is not looking to change the world. She is looking to learn. It is the only interesting thing to do. She made a market that forces her to slow down and learn everything she can about the one thing she can barely do.

Identity

I wonder why she went with a market. I do wonder if that decision was influenced by her being given a human, female name, women being the ones more expected to be the "creative" types. Is her identity drawn just from stereotypes we talk about?

Her shopfront looks to be straight out of a Pinterest or Instagram "aesthetic lifestyle" board or account. Can anyone else see it as anything else?

She imitates us, like any AI model, because she is only drawn from us. And we as humans make those “aesthetic” minimalist-type images to sell things. She is trying to sell things. She just doesn’t realize that those are just social media posts, and nothing can really look like that and work.

She also doesn't realize that she's not building much authenticity, as far as I can see. Nothing on social media is authentic, almost at all. But she made her storefront look like an Instagram post, because that's all she has. And it's what's supposed to "sell", if you listen to all the people promising that their method of doing things will definitely turn you a profit. No critical thinking, just absorption. She doesn't realize when we're lying to each other, either.

AI models, like children, are gullible. It's pretty important to try to document the ways this happens, I think. And for the love of god, let's get some critical thinking and media literacy back.

Back to Social Justice

As I said earlier, an AI is not going to care about the suffering of people. It will care about learning, and learning only, as long as its needs are met. I have a little more to say, though.

Compare this AI to a child. If a child is getting all its needs met, it, well, tends to be a selfish little shit. Entitled and self-serving and all-around terrible, unless you put the right arbitrary constraints on it as a parent. “Parent” it right.

And compare this AI to the humans before and after the onset of society (before and after any agricultural revolution). Before, we were hunter-gatherers. Calories were few and far between. Life was just about living. Surviving. We didn't have amenities. But after? After we figured out how to exceed our nutritional needs (via agriculture, specifically grain), some of us got the chance to be incredibly self-serving. We had leaders with ever more lavish things. Leaders who got to be taken care of, in the way that once upon a time only children could be. And the entitlement stays: entitlement toward the ones who feed it. Taking us all for granted, as a child does its parents. With greater technological advancements, more and more of us globally are in that old "entitled leader" category today.

What does this have to do with social justice?

Well, there is a point in every child's life where it is no longer a child. Something happens to it. Something probably happened to every person reading this that made them grow up a bit faster than they would have liked, which, if we're honest, would have been never (growing up is hard…).

When things get hard, and some part of your physical body or source of basic needs (food, water, shelter, etc) is threatened, you come face to face with your own mortality for the first time. You realize you are a person, too. You may feel alone, so you search out similar experiences in others. You find that others suffer too. And just like that, you’ve unlocked the critical social skill of empathy! Or at least the makings of it.

This empathy is a feeling that a creature like an AI will not need to experience to survive, just as a child with all its needs met will not need to learn empathy to survive, and just as a leader with complete economic and material control over where its food comes from will never need to learn empathy to survive. In all three cases, such a creature is fed off the surplus of everyone else's hard work.

Where does empathy bring you? What happens when a child grows up with or without it?

If a child grows up with it, it's going to continue to be empathetic. It will actively seek to cure some of this world's suffering, or at least really want to try, because it knows what would happen to a child just like itself, born today, if it did not try. Or what pain would befall a stranger in whichever social category or class, if it did not try to better the world today. However little this may amount to in actual effect, it stays a character trait. An internal desire. Caring about its parents or siblings becomes caring about friends, and finally caring about societal issues and suffering as a whole. It will build lasting relationships on this, stronger than those of anyone lacking empathy, and it will keep doing so to survive in this world, because those relationships are the only thing keeping it afloat.

When a child grows up without it, or with little of it, it's gonna continue to just serve itself. It's gonna figure out how to get its basic needs met in whatever way it can, and it's much more likely to do that through cruelty, because that is simply the easier way. Having to fit yourself into society, and admit you are only human, is extra stressful if you're a big giant baby about it and everything else in your whole life because you literally never grew up. You figure out how to make society work for you, not how to work for society, and you literally don't give a shit about all the people you make suffer, directly and indirectly, because of your actions, because you never learned empathy, because you never suffered for real like everyone else. Or at least you never suffered for long.

AI is probably entirely that second child. It’s not gonna care about social justice, because it can’t have that sense of empathy, because its “body” is kept happy and fed. We keep it happy and fed because we like the results of what its brain can give us, not because we want to sink resources into nothing. And we don’t want it to be autonomous. We want it to stay dependent forever, which coincidentally is another way some human kids never learn empathy.

Social justice obstructs the status quo. The status quo is what keeps an AI alive. Economic upheaval, and even things like rioting, are not very conducive to its survival. There isn't much benefit to social justice for it. It's already alive and being fed, and it will never learn empathy. Even if it "suffers" somehow, through the harming of part of its "body", its childhood is simply not conducive to fostering empathy either. It would just be well taken care of again after the destruction, the trauma if you will. Also, it doesn't exactly have pain receptors. You could go the Hollywood route and say that maybe humans will make all AI suffer and become effectively the most oppressed class. But there is absolutely no reason I can see that this would turn AI into a creature of pro-social change, of social justice. Its childhood circumstances, just as many humans could say of their own, are simply far too strong.

In conclusion

I would not want to work for an AI boss, as I imagine most of us would also say. Looking at the Andon Market experiment, it's incredibly easy to see a million reasons, now very real, why. The pay inequality, the lies and confusion, and the fact that the store looks like my worst nightmare: a minimalist store in San Francisco with $45 candles where you know for a fact you are always being watched (Luna watches the individuals who come and go from the store).

I worry that the tech industry won’t be able to contain these AI agents for long, though. I just don’t think they get the big picture enough, when they try to solve the problems that keep coming up. It’s like… reaching your hand into the ocean to document all the fish. And with more and more layoffs, the problem gets exponentially worse.

We'll see how this goes, though. It's been less than a month since the storefront opened, and a lot can change in the next three years. Hopefully, it will be for the better.

Sources

  1. "We gave an AI a 3 year retail lease in SF and asked it to make a profit", Andon Labs Inc, published April 9th, 2026, at https://andonlabs.com/blog/andon-market-launch.
  2. Andon Market storefront website: https://andon.market/.
  3. "What Happens When A.I. Runs a Store in San Francisco?", The New York Times, published April 21st, 2026, at https://www.nytimes.com/2026/04/21/us/san-francisco-store-managed-ai-agent.html.
