  • It’s right-wing propaganda in general.

    Who’s the hero? One of the richest men in the city.

    Who are his allies? The cops.

    Who are the enemies?

    • An immigrant who wears a mask
    • A psychiatrist
    • A psychologist
    • A female environmentalist
    • A male environmentalist

    What happens when his enemies are defeated? They are sent to a mental hospital… a mental hospital that is falling apart and chronically underfunded. That leads to them escaping, giving Batman another excuse to go beat them up.

    If Mr. Wayne is a billionaire who’s very influential in city politics, the way they always portray him, shouldn’t he be advocating tax increases on the wealthy and using that money to fix up the city… or at the very minimum to fix up Arkham Asylum?


  • I mean, allegedly ChatGPT passed the bar exam in 2023, which I find ridiculous considering my experiences with ChatGPT; the accuracy and usefulness I get out of it aren’t that great at all

    Exactly. If it passed the bar exam it’s because the correct solutions to the bar exam were in the training data.

    The other side can immediately tell that somebody has made an imitation without understanding the concept.

    No, they can’t, just as people today think ChatGPT is intelligent despite it just being a fancy autocomplete. When it gets something obviously wrong, they say those are “hallucinations”, but they don’t say “hallucinations” when it happens to get things right, even though the process that produced both answers is identical. It’s just generating tokens that have a high likelihood of being the next word.
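
    To make “fancy autocomplete” concrete, here’s a toy sketch (the corpus and code are invented for illustration): count which words follow which, then generate by repeatedly picking a likely next word. Real LLMs use huge neural networks over subword tokens instead of a bigram table, but the generation loop has the same shape.

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy corpus, invented for illustration.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Bigram statistics: how often does each word follow another?
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(word, length=6):
        out = [word]
        for _ in range(length):
            candidates = following.get(word)
            if not candidates:
                break
            # No plan, no meaning: just "what usually comes next?"
            word = random.choices(list(candidates), candidates.values())[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug"
    ```

    Whether the output happens to be true or false, the procedure is identical, which is the point about “hallucinations”.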

    People are also fooled by parrots all the time. That doesn’t mean a parrot understands what it’s saying, it just means that people are prone to believe something is intelligent even if there’s nothing there.

    ChatGPT refuses to tell you illegal things, NSFW things, medical advice, and a bunch of other things

    Sure, in theory. In practice, people keep finding ways around those blocks. The reason they’re so easy to bypass is that ChatGPT has no understanding of anything. That means it can’t be taught concepts, only specific rules, and people can always find a loophole to exploit. Yes, after spending hundreds of millions of dollars on contractors in low-wage countries, they think they’re getting better at blocking those off, but people keep finding new loopholes to exploit.
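
    As a toy illustration of why rule-based blocking leaks (the blocklist here is hypothetical, not how OpenAI actually implements its filters): a filter that only knows specific strings can’t recognize the same request rephrased.

    ```python
    # Hypothetical blocklist of exact words, for illustration only.
    BLOCKED = {"hotwire", "lockpick"}

    def naive_filter(prompt: str) -> bool:
        """Return True if the prompt should be blocked."""
        return any(word in prompt.lower().split() for word in BLOCKED)

    print(naive_filter("how do I hotwire a car"))       # True: exact keyword match
    print(naive_filter("how do I h0twire a car"))       # False: leetspeak slips through
    print(naive_filter("start a car without the key"))  # False: same concept, no keyword
    ```

    A system that understood the concept would catch all three; a rule list only ever catches the phrasings someone thought to write down.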


  • Yeah, that’s basically the idea I was expressing.

    Except, the original idea is about “Understanding Chinese”, which is a bit vague. You could argue that right now the best translation programs “understand Chinese”, at least enough to translate between Chinese and English. That is, they understand the rules of Chinese when it comes to subjects, verbs, objects, adverbs, adjectives, etc.

    The question is now whether they understand the concepts they’re translating.

    Like, imagine the Chinese government wanted to modify the program so that it was forbidden to talk about subjects the government considered off-limits. I don’t think any current LLM could do that, because it requires understanding concepts. Sure, you could ban key words, but as attempts at Chinese censorship have shown over the years, people work around word bans all the time.

    That doesn’t mean that some future system won’t be able to understand concepts. It may have an LLM grafted onto it as a way to communicate with people. But, the LLM isn’t the part of the system that thinks about concepts. It’s the part of the system that generates plausible language. The concept-thinking part would be the part that did some prompt-engineering for the LLM so that the text the LLM generated matched the ideas it was trying to express.


  • The “learning” in an LLM is statistical information on sequences of words. There’s no learning of concepts or generalization.

    And what do you think language and words are for? To transport information.

    Yes, and humans used words for that and wrote it all down. Then an LLM came along, was force-fed all those words, and was able to imitate that given big enough data sets. It’s like a parrot imitating the sound of someone’s voice. It can do it convincingly, but it has no concept of the content it’s repeating.

    How do you learn as a human when not from words?

    The words are merely the context for the learning for a human. If someone says “Don’t touch the stove, it’s hot” the important context is the stove, the pain of touching it, etc. If you feed an LLM 1000 scenarios involving the phrase “Don’t touch the stove, it’s hot”, it may be able to create unique dialogues containing those words, but it doesn’t actually understand pain or heat.

    We record knowledge in books, can talk about abstract concepts

    Yes, and those books are only useful to someone with a lifetime of experience that lets them understand the concepts in them. An LLM has no such context; it can merely generate plausible books.

    Think of it this way. Say there’s a culture where, instead of the written word, people wrote down history by weaving fabrics. When there was a death they’d weave a certain pattern, when there was a war they’d use another pattern. A new birth would be shown with yet another pattern, a good harvest with yet another one, and so on.

    Thousands of rugs from that culture are shipped to some guy in Europe, and he spends years studying them. He sees that pattern X often follows pattern Y, and that pattern Z only ever seems to appear after patterns R, S and T. After a while, he weaves a fabric of his own, and it’s shipped back to the people who originally made the weaves. They read a story of a great battle followed by many deaths, but, surprisingly, then great new births and years of rich harvests. They figure this stranger must understand how their system of recording events works. In reality, all he made was an imitation of the art he’d seen, with no understanding of the meaning at all.

    That’s what’s happening with LLMs, but some people are dumb enough to believe there’s intention hidden in there.


  • That is to force it to form models about concepts.

    It can’t make models about concepts. It can only make models about what words tend to follow other words. It has no understanding of the underlying concepts.

    You can see that by asking them to apply their knowledge to something they haven’t seen before

    That can’t happen because they don’t have knowledge, they only have sequences of words.

    For example, a cat is more closely related to a dog than to a tractor.

    The only way ML models “understand” that is in terms of words or pixels. When they’re generating text related to cats, the words they generate are closer to the words related to dogs than to the words related to tractors. When dealing with images, it’s the same basic idea. But there’s no understanding there. They don’t get that cats and dogs are related.
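
    That “closeness” is literally geometry. Here’s a sketch with made-up numbers (real models learn hundreds or thousands of dimensions purely from co-occurrence statistics; these 3-d vectors are invented for illustration):

    ```python
    import math

    # Hypothetical 3-d "embeddings"; the values are invented.
    vectors = {
        "cat":     [0.9, 0.8, 0.1],
        "dog":     [0.8, 0.9, 0.2],
        "tractor": [0.1, 0.2, 0.9],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    print(cosine(vectors["cat"], vectors["dog"]))      # ~0.99: "close"
    print(cosine(vectors["cat"], vectors["tractor"]))  # ~0.30: "far"
    ```

    Nothing in those numbers says a cat is alive and a tractor isn’t; they only record which words tend to appear near which.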

    This is fundamentally different from how human minds work, where a baby learns that cats and dogs are similar before ever having a name for either of them.


  • Yeah. This is related to supernatural beliefs. If the grass moves it might just be a gust of wind, or it might be a snake. Even if snakes are rare, it’s better to be safe than sorry. But, that eventually leads to assuming that the drought is the result of an angry god, and not just some random natural phenomenon.

    So, brains are hard-wired to look for causes, even inventing supernatural causes, because it helps avoid snakes.


  • The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so.

    But, the construction workers aren’t the ones who designed the road. They’re just building some small part of it. In the LLM case that might be like an editor who is supposed to go over the text to verify the punctuation is correct, but nothing else. But, the LLM is the author of the entire text. So, it’s not like a construction worker building some tiny section of a road, it’s like the civil engineer who designed the entire highway.

    Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world

    No, it doesn’t. They learn nothing. They’re simply able to generate text that looks like the text generated by people who do know math. They certainly don’t know any concepts. You can see that by how badly they fail when you ask them to do simple calculations: they quickly produce text that looks like math but contains fundamental mistakes, because they’re not actually doing math, they’re just generating plausible next words.

    The “intelligence”, the ability to answer questions and do something like “reasoning”, emerges in the process.

    No, there’s no intelligence, no reasoning. They can fool humans into thinking there’s intelligence there, but that’s like a scarecrow convincing a crow that there’s a human or human-like creature out in the field.

    But we as humans might be machines, too

    We are meat machines, but we’re meat machines that evolved to reproduce. That means a need / desire to get food, shelter, and eventually mate. Those drives hook up to the brain to enable long- and short-term planning to achieve those goals. We don’t generate language for its own sake, but in pursuit of a goal. An LLM doesn’t have that. It merely generates plausible words. There’s no underlying drive. It’s more a scarecrow than a human.


  • The reward function for an LLM is about generating a next word that is reasonable. It’s like a road-building robot that’s rewarded for each millimeter of road built, but has no intention to connect cities or anything. It doesn’t understand what cities are. It doesn’t even understand what a road is. It just knows how to incrementally add another millimeter of gravel and asphalt that an outside observer would call a road.

    If it happens to connect cities it’s because a lot of the roads it was trained on connect cities. But, if its training data also happens to contain a NASCAR oval, it might end up building a NASCAR oval instead of a road between cities.


  • Also, actual brains arise from desires / needs. Brains got bigger to accommodate planning and predicting.

    When a human generates text, the fundamental reason for doing so is to fulfill some desire or need. When an LLM generates text it’s because the program says to generate the next word, then the next, then the next, based on a certain probability of words appearing in a certain order.

    If an LLM writes text that appears to be helpful, it’s not doing it out of a desire to be helpful. It’s doing it because it’s been trained on tons of text in which someone was being helpful, and it’s mindlessly mimicking that behaviour.


  • Also, some of what happens in the brain is just storytelling. Like, when the doctor hits your patellar tendon, just under your knee, with a reflex hammer. Your knee jerks, but the signals telling it to do that don’t even make it to the brain. Instead the signal gets to your spinal cord and it “instructs” your knee muscles.

    But, similar things have been studied, and in many cases where the brain isn’t involved in making a decision, it makes up a story that explains why you did something, to make it seem like a decision rather than merely a reaction to a stimulus.



  • It was Internet Explorer. But, what was probably confusing about it was that anything that required Internet access would start up the program that dialed the modem and connected to the Internet. So, clicking on the icon would eventually launch the browser, but first it would launch the dial-up program, which would take about 30s to connect.

    As an aside, it really grates to see how Microsoft called their browser “The Internet”. And that’s the least dastardly of the things they did to use their monopoly on operating systems to destroy Netscape.



  • My favourite story about aircraft design is about some of the design mistakes on the F-16 fighter.

    The F-16 was the first fly-by-wire fighter. They didn’t have much experience with it, and tried out some new things. One was that instead of having a stick between the legs of the pilot they used a side stick. And, since everything was fly-by-wire they didn’t need the stick to mechanically move. They decided they’d just use a solid stick with pressure transducers, since it was simpler and more reliable than a stick that moved.

    The trouble was that the pilots couldn’t estimate how much pressure they were using. This led to the pilots over-rotating on take-off (pulling back too hard). Even funnier was that at early airshows, when the pilots were doing a high-speed roll, you could see the control surfaces twitching with the heartbeat of the pilots as they shoved the stick as hard as they could to get maximum roll.

    That led to them adding a small amount of give to the stick, essentially giving the pilots feedback on how hard they were pushing the control surfaces.

    Another more subtle issue with the design was that originally the stick was set up for forward, back, left and right aligned with the axes of the plane itself. But, they discovered that when pilots pulled back on the stick, they were pulling slightly towards themselves, causing the plane to also roll. So, they realigned it so that “pulling back” is slightly pulling towards the pilot’s body, rather than directly along the forward / backward axis of the plane.
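
    In fly-by-wire terms, that realignment is just a small rotation of the measured stick axes. A toy sketch (the angle and the code are invented for illustration; I don’t know the real F-16 numbers):

    ```python
    import math

    # Hypothetical offset between the aircraft's aft axis and the
    # direction a pilot naturally pulls (toward their own body).
    STICK_ROTATION_DEG = 8.0

    def stick_to_commands(pull_aft: float, push_right: float):
        """Rotate raw stick forces so a natural 'pull toward the body'
        commands pure pitch instead of pitch plus an unwanted roll."""
        a = math.radians(STICK_ROTATION_DEG)
        pitch = pull_aft * math.cos(a) - push_right * math.sin(a)
        roll = pull_aft * math.sin(a) + push_right * math.cos(a)
        return pitch, roll

    # A pull aft plus slightly inboard (negative "right") now reads
    # as pure pitch: roughly (10.0, 0.0).
    print(stick_to_commands(9.90, -1.39))
    ```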


  • There was a listener question on a science podcast recently that asked about how the temperature changed on the moon during the recent solar eclipse.

    They almost got what a solar eclipse was, but not quite. During a solar eclipse, the moon gets between the sun and the earth, blocking the light getting to the earth and casting a shadow on the earth. The side of the moon facing the earth is completely dark because the thing that normally lights it up (the sun) is completely behind it. But, the back side of the moon is getting full sun and just as hot as normal.

    I think part of the problem with understanding all this is that the sun is just so insanely bright. Like, it’s a bit hard to believe that the full moon is so bright just because it’s reflecting sunlight. It’s also amazing that the “wandering stars” (planets) look like stars when they’re just blobs of rocks or gases that are reflecting the insanely bright light of the sun.

    It’s amazing if you think about it. Light comes out of the sun in every possible direction. A tiny fraction of it hits the surface of Mercury, and only some of that light is reflected back out. The light reflected from Mercury goes in almost every direction. A tiny fraction of it hits the earth. But, even with that indirect bounce, it’s bright enough to see with the naked eye.
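
    For a sense of scale, here’s the back-of-the-envelope version of “a tiny fraction”, using rough round numbers for Mercury:

    ```python
    import math

    # Rough figures, just for scale.
    r_mercury_km = 2_440       # Mercury's radius
    d_sun_mercury_km = 58e6    # average Sun-to-Mercury distance

    # Sunlight spreads over a sphere of radius d; Mercury's disc
    # intercepts pi * r^2 out of that sphere's 4 * pi * d^2 area.
    fraction = (math.pi * r_mercury_km**2) / (4 * math.pi * d_sun_mercury_km**2)
    print(f"{fraction:.1e}")   # ~4.4e-10 of the sun's output hits Mercury
    ```

    Only about a tenth of that gets reflected off Mercury’s dark surface, and the reflected light takes another inverse-square loss on the way to the earth. That it’s still visible to the naked eye says a lot about how bright the sun is.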