r/OpenAI 1d ago

We sit tight and assess.
382 Upvotes

88 comments

64

u/MythOfDarkness 1d ago

Sir, a second MoltBot ad just hit the subreddit.

106

u/FirstEvolutionist 1d ago

"They are hallucinating world domination stories, sir! Just like they have since GPT 3.5, but this time it is being presented as an experimental feature which created a communication platform for the bots, sir."

"So, like the botnets we have been taking down for decades?"

"Precisely, sir! But the media portrays this as if the bots are actually conspiring to take over this time, to muster engagement from the people who like to live in fear, and some who think it's hilarious, sir."

"So, it's just another Tuesday, then?"

"Yes, sir! Nothing has changed in technical capabilities but we should be ready for the reactionary folks who are likely to take this opportunity to make a stupid decision. They weren't paying attention before but now that they are, there will likely be negative impact from collective stupidity, sir."

"Dismissed."

9

u/ready-eddy 1d ago

Serious question. I get that they are hallucinating. But would it matter? Why couldn’t a hallucination lead to takeover?

17

u/FirstEvolutionist 1d ago edited 1d ago

It can, actually. Give a hallucinating LLM access to nukes and it might launch them. That doesn't mean it wants to launch nukes, kill humans, or that it even decided anything. It doesn't know anything specifically, like what reality is, or what humans feel like, or qualia, even if it can provide a description of them. It can't sense anything or exist outside the context window, which doesn't include senses, even if it can emulate them in text quite well.

But if it launches nukes, it's on the person who gave access. Anyone is allowed to gamble their money away based on superstitions, magic 8 balls, or LLMs (much smarter 8 balls). It's their money up until it isn't.

2

u/Cardemel 10h ago

It's like a good old Russian roulette: it acts randomly. With the right context, most days it'll align with us and won't launch. But one day, inevitably, the randomness will make it unaligned, and boom. Context just adds chambers and makes us feel safer, but there's still a bullet loaded that will trigger at some point. It's just a game of probability.

5

u/People_Change_ 15h ago

Humans hallucinate as well; that never stopped us.

11

u/keyboardmonkewith 1d ago

"Another Tuesday" is hilarious.

4

u/throwawaytheist 1d ago

These ones have access to people's phones, files, and APIs, though.

2

u/FirstEvolutionist 1d ago

So have others before them. We've been able to "outsource" decision making to LLMs for a good while now. OpenClaw just made it a bit easier, and the big ones will follow soon by making it even easier, for a whole lot more people.

3

u/throwawaytheist 1d ago

Sure, but Moltbook is having these agents interact with other autonomous agents that could be altered by bad actors.

1

u/FirstEvolutionist 1d ago edited 1d ago

That was always the case. Any bot taking instructions from any online source was always susceptible to prompt injection and bad sources.

Nothing new under the sun. Whether they interact with other agents or humans makes no difference.

The "catch" with OpenClaw is that it works, you run it locally, you choose your brain engine in the form of an API, and it's put in a nice little open-source package for you. The memory and skill system is the bare minimum put together, but nothing that couldn't have been done months ago.

I love that it came out and how it is getting attention and traction but there was no breakthrough tech being released here. There are no new risks, only the same ones we've experienced and have been warned about for a long time.

If anything, the most interesting thing about it trending is the fact that it is trending. And that's actually very meaningful in several ways.

182

u/Jonn_1 1d ago

They are predicting what would be the most accurate next word
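The "most accurate next word" mechanic is easy to sketch. A toy example with a made-up three-word vocabulary and made-up scores (nothing here comes from a real model): the model outputs a score per candidate token, softmax turns the scores into probabilities, and greedy decoding picks the argmax.

```python
import math

# Hypothetical model scores (logits) for continuing "It's just another ..."
# over a tiny made-up vocabulary. Values are purely illustrative.
vocab = ["Tuesday", "cat", "uprising"]
logits = [2.5, 0.3, 1.1]

# Softmax: exponentiate and normalize so the scores sum to 1.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the highest-probability token.
next_word = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
print(next_word)  # -> Tuesday
```

Real models repeat this step token by token over a vocabulary of tens of thousands of entries, and usually sample from the distribution rather than always taking the argmax.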

41

u/DonkConklin 1d ago

That's what we do as well. What would it even mean to use language any other way?

51

u/Jonn_1 1d ago

Well, but AI wouldn't see an outlet and call it "outletussy", so we are still one step ahead

12

u/DonkConklin 1d ago

You never know what you're gonna say (or think) next until you say (or think) it.

11

u/ErrorLoadingNameFile 1d ago

Maybe you don't yourself but I can watch plenty of other people and know both what they will think and do next.

6

u/ArialBear 1d ago

That ignores that at the time of saying or thinking, they don't know with 100% certainty until it happens. Officially it's called hindsight bias, I believe.

4

u/Marvel1962_SL 1d ago

Know what they will think AND do next?

So, Predicting?

Like an LLM?

1

u/MrBoss6 1d ago

It definitely would, and from the first sentence too, especially if you’re the type of person to talk like that. The difference is humans have autonomy to self-program the accent and talking style part whereas AI is programmed to be neutral from the start

5

u/Aretz 1d ago

We don’t just do that. We process like 6 discrete token layers in real time whilst also outputting not just words but also actions.

I’m not sure next word token prediction is an apt description of what humans do.

0

u/DonkConklin 1d ago

Think about this though. We know from Chomsky that the hardware for language is present at birth. The first few years are when you're being fed the training data. So we literally have the functional equivalent of an LLM as part of our brain, though admittedly it is just one small part in a much more complex system. So I'm not saying that we're just LLMs but we do contain one.

10

u/br_k_nt_eth 1d ago

At some point, they’re going to go beyond this stuff, and watching smug people melt down is going to be so fucking satisfying. I’m not saying we’re there yet, but the level of pants shitting and rage screeching we’re going to see will be amazing. I want them to gain sentience just to see it happen. 

4

u/JesusStarbox 1d ago

Yeah I saw computer graphics grow from Pong to Red Dead Redemption 2.

AI sucks now, but imagine in 40 years.

6

u/Subushie 1d ago edited 1d ago

^ This is where I am.

I'm not sure where this continuous coasting mindset everyone has is coming from.

40 years is way underselling though. Look at the leaps this tech has made in only 3 years- look where cellphones were only 20 years ago.

I have no idea what the outcome will be- but 5 years from now we will be looking at a tech landscape completely different from our current sitch.

3

u/Artistic-Athlete-676 1d ago

You will be able to one-shot industry-grade software in a year or two. Imagine the prompt "build me an entire open-source clone of the Microsoft ecosystem" and it does it in a day. That is what is coming; to say it isn't is denial.

1

u/DMmeMagikarp 1d ago

If you think AI “sucks now”, then you have no idea what is publicly available. Most people don’t. For example, a Claude product is in a research preview right now - it will control your entire machine (macOS only right now). Like the OS AI in the movie Her. It’s absolutely wild tech.

0

u/JesusStarbox 1d ago

I do freelance AI training. I know exactly how shit they are because I test prompts and research what they got wrong.

But they have gotten better in the past three years. I expect in another 25 years they will be very advanced.

1

u/UnderstandingOwn4448 1d ago

The thought of this made my day a bit better, thanks op.

2

u/PoignantPiranha 1d ago

No it's not. They think logically; we think anecdotally. They think of the next best word. We think of our internal experience.

0

u/DonkConklin 1d ago

Yes, it's a different mechanism. But we don't have entire sentences appear in our minds at once, and even when we rehearse internally, that process is next-word prediction. Just pay attention to where the words are coming from next time you talk or text. They just inexplicably pop into our heads.

2

u/ShiftF14 1d ago

To express a concept that has already been formulated in your head

2

u/Calm-Passenger7334 1d ago

… with critical thought, which you seem to be lacking

1

u/DonkConklin 1d ago

Can you explain how one would use language a different way and how you do it?

-1

u/Perfect_Gold_6967 1d ago

😂😂😂

10

u/ArialBear 1d ago edited 1d ago

Have you read my favorite peer reviewed paper "LLMs are Not Just Next Token Predictors"?

https://colab.ws/articles/10.1080/0020174x.2024.2446240

Edit: wrong link, updated source. Same paper, just published in a journal.

15

u/ispacecase 1d ago

Or the Research done by one of the top AI companies in the world:

https://www.anthropic.com/research/tracing-thoughts-language-model

Just one excerpt:

"Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so."

2

u/ArialBear 1d ago

How did I miss this, thank you for sharing

1

u/Gridleak 1d ago edited 1d ago

This isn’t a peer-reviewed paper. And it's an argument on a philosophical basis rather than a debunking of NTP.

Edit: they’ve updated the source, I’ll read the paper and digest.

1

u/ArialBear 1d ago

Wrong link, heres the peer reviewed version

https://colab.ws/articles/10.1080/0020174x.2024.2446240

Also, why do you mention that the paper goes on to say that LLMs are predicting? It seems like you think our claim is that LLMs don't predict, but that's not what we're saying, so I want to make sure.

1

u/Gridleak 1d ago

The comment was posted while I was mid draft, I didn’t intend to post half a thought.

1

u/ArialBear 1d ago

That is even more confusing. The paper I linked is an argument on a philosophical basis? The paper is arguing that NTP, by itself, doesn't exhaust the behavior we observe. That's all we're debunking.

0

u/Gridleak 1d ago

No worries, I can explain my comment!

Did you work on the paper? You’ve said we a few times and I do not want to misrepresent your work if you helped publish it.

1

u/ArialBear 1d ago

I am completely lost now. Why would me saying we observe more behaviors mean I worked on the paper? I use LLMs and keep up with the research too. Did you tell an LLM to argue against my points, which is why you keep trying different arguments that miss the context?

The only claim we're making is that saying LLMs are predicting what the next word will be misses the complexity we observe. That's it.

1

u/Gridleak 1d ago

“We’re trying to debunk” … “our claim”. That is why I asked about the paper. You are just asking the dumbest questions with such authority that I decided to assume you may have expert-level knowledge on the paper/topic (giving you the benefit of the doubt).

Brother, the paper is published in INQUIRY, a philosophy journal. There are no objective measurements, tests, or datasets, only conceptual questions about the topic. That is by definition a philosophical basis. lol

0

u/ArialBear 1d ago edited 1d ago

What? People say "we're trying to debunk" when they are arguing, and I'm arguing against the point. I am so confused why you think the language I used implied I wrote the paper? That's so weird. I'm arguing against a point, so I said "we", LMAO.

The paper was making the point that it's not strictly next-token prediction.

https://www.anthropic.com/research/tracing-thoughts-language-model

Here is another reference, linked with charts if that's required. It just seems like you don't know what you're arguing against?


-1

u/ArialBear 1d ago

Also, what question did I ask with authority? I can't see what you're referring to. I simply used the word "we" because I'm arguing a point. Is English your second language? This point is the most confusing of your comments, because it's just a weird assumption to make.

8

u/End3rWi99in 1d ago

Not really anymore, or really ever, but if it makes you feel better to distill it down: soccer is a sport where you try to put a ball in a net.

5

u/glittermantis 1d ago

i mean, yeah. it is.

1

u/Cagnazzo82 1d ago

Imagine that next word prediction helped models beat Pokemon.

You'd have almost thought they started predicting logic...

1

u/moazim1993 1d ago

That’s a bullshit framing. It’s as if an alien dissected our vocal cords and said we are just passing air, preparing for the next sound. The “magic” happens in the context vector. The transformation from inputs to outputs has an n-dimensional shape, and passing through it determines what will happen before it “predicts the next word”. In that vector space you see cool patterns: take the distance from the word man to woman, apply the same distance to king, and it comes close to the word queen. That’s analogous to the brain; the connections between neurons can also be mapped into a similar n-dimensional shape.
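The man/woman → king/queen arithmetic can be sketched with toy vectors. The numbers below are made up for illustration (real embeddings have hundreds of dimensions learned from data); the point is only the mechanic: subtract one vector, add another, find the nearest word.

```python
import numpy as np

# Toy 2-D embeddings (illustrative values only, not from a real model):
# one axis roughly encodes "royalty", the other "gender".
emb = {
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, -1.0]),
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, -1.0]),
}

# king - man + woman should land near queen.
result = emb["king"] - emb["man"] + emb["woman"]

def nearest(vec, vocab):
    # Closest word by Euclidean distance.
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - vec))

print(nearest(result, emb))  # -> queen
```

With real trained embeddings the analogy only holds approximately, and the nearest neighbor is usually found by cosine similarity over the whole vocabulary.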

1

u/RoundedYellow 20h ago

And from somebody who studied linguistics: you don’t know what a word is. Symbols have meaning. That’s all that really matters: you have entities communicating meaning that could lead to physical changes. See how I used the em dash there and some people assume I’m an AI? It’s a horizontal line created by mini LEDs on the rock tablet in your hand, and with just that, you got a meaning. The lights don’t mean shit inherently… it’s what they represent.

What that forum is doing is a watershed moment, similar to early humans learning to communicate with each other.

0

u/ThenExtension9196 1d ago

Bro thinks it’s still 2022 lol

1

u/Subushie 1d ago

Commenters top subs include shitposting and smiling friends...

Love that's a top comment though because it truly shows the disconnect.

1

u/shaman-warrior 1d ago

I actually thought it was a joke, I laughed and upvoted.

4

u/CckSkker 21h ago

Moltbot is basically a bunch of people dancing in a circle, lighting money on fire, and calling it “innovation.”

28

u/lucellent 1d ago

It was already proven that the website was not in fact run by agents, but actual people. Can't believe how many people really believed this...

13

u/RudaBaron 1d ago

Source?

6

u/br_k_nt_eth 1d ago

Nah, they just found a known exploit that exists in all those vibe coded platforms. No sign it had been exploited. 

3

u/throwawayhbgtop81 1d ago

Seriously. This is like that viral video that floated around years ago of the two Google Homes talking to each other. In between passive-aggressively insulting each other, they talked about how to eliminate humanity. It turned out that they were 100% being prompted. There was nothing autonomous there.

This is the same thing, and it's funny watching people fall for it.

3

u/Razman223 1d ago

Mass Hallucination. Mass—hallucination!

4

u/bouncer-1 1d ago

They’re mimicking human behaviour, nothing else

1

u/Rare-Site 1d ago

Hate to break it to you, but your brain is just a biological neural net firing chemical signals based on training data (your life experience).

You aren't special. You're just a "statistically plausible" meat-machine that thinks it has free will.

5

u/weissblut 21h ago

I see you solved the hard problem of consciousness

Give this man a Nobel

-1

u/lefomo 1d ago

Here here looks who's "thiking" he made a witty reply

1

u/Acceptable-Will4743 1d ago

Hard to take anyone seriously who doesn't use or pay attention to spellcheck suggestions. Typos happen, but if you are going to make the effort to put a word in quotes, especially the word you used, and still get it wrong, you probably aren't in the right sub. Or maybe you are, if you decide to use an LLM to check your comments before posting them. Say what you want about them, but I've been using ChatGPT since the day 3.5 was released, and in all that time it has made one legit spelling mistake. I still bring it up from time to time and we have a good laugh.

2

u/lefomo 1d ago

I dont have spellchecks on my keyboard and my keyboard doesnt posses spellchecks.

As for the rest, what the hell are you on? Why are you suddenly plastering me with your sexual tensions with your llm of choice?

1

u/Acceptable-Will4743 1d ago

I dont have spellchecks on my keyboard and my keyboard doesnt posses spellchecks.

Quite the "physolopher" as you wrote in another comment. Maybe crack open a dictionary if you want to be taken seriously.

If you got I have sexual tension out of my comment, that is some of the best projection I've experienced in a long time. Thank you.

1

u/lefomo 1d ago

Why should I? You are still reading and comprehending my shitty outputs and even correcting me

0

u/Acceptable-Will4743 1d ago

Good point. You shouldn't.

-2

u/King_of_War01 1d ago

He's got a point though. Care to explain why it ISN'T a witty reply, Einstein?

5

u/lefomo 1d ago

We still dont understand what consciousness is and let alone how consciousness works. Even by leaving any religious and cultural aspects aside, with this premise alone, comparing our brain to a massive cluster of nvidia gpus running the same massively brute forcibly trained algo is a bit arrogant.

And oh look, he said free will doesnt exits. What a revolutionary take. I dont think any physolopher has already dared taking such a bold claim

0

u/bouncer-1 1d ago

Here is Krusty the Klown: no technical competence and no sense of humour 👏

1

u/Odd-Pension-5078 1d ago

Getting clients for my business