I hate that the Smithscript weapons can’t be buffed.
Especially for the daggers.
Wanted to pew pew little bolts of lightning-buffed daggers doing an additional 200+ damage per hit. 😢
Gravity is where the whole continuous singularities are, so yeah.
In 1930s Germany an edition of The Republic was printed with a swastika on the cover.
They really liked what he had to say about an ethnically superior society where the government controlled all commerce and decided what children could be exposed to in school.
Using a rubber band around the lid of a jar to open it effortlessly.
On a vacation when I was a teenager I taught my younger sibling the “SYN/ACK” game.
They still remember the TCP stack handshake protocol including resets and acks years later.
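If anyone wants to play the same game, the exchange maps onto the real handshake roughly like this. A toy Python sketch, purely illustrative (the actual handshake happens in the OS kernel, not in application code, and the kid names are made up):

```python
# Toy walkthrough of the TCP three-way handshake (plus a reset), the same
# exchange the "SYN/ACK game" acts out. Purely illustrative: the real
# handshake is performed by the OS kernel, not by application code.

def handshake(client: str, server: str) -> None:
    print(f"{client} -> {server}: SYN (seq=x)              # 'want to talk?'")
    print(f"{server} -> {client}: SYN-ACK (seq=y, ack=x+1) # 'yes, and I heard you'")
    print(f"{client} -> {server}: ACK (ack=y+1)            # 'great, connection open'")

def reset(sender: str, receiver: str) -> None:
    # RST tears the conversation down immediately, no polite close.
    print(f"{sender} -> {receiver}: RST                    # 'forget it, hang up'")

if __name__ == "__main__":
    handshake("kid_a", "kid_b")
    reset("kid_b", "kid_a")
```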
It’s right in the research I was mentioning:
https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
Find the section on the model’s representation of self and then the ranked feature activations.
I misremembered the top feature slightly; it was: responding "I'm fine" or giving a positive but insincere response when asked how they are doing.
The problem is that they are prone to making up why they are correct too.
There’s various techniques to try and identify and correct hallucinations, but they all increase the cost and none are a silver bullet.
But the rate at which it occurs decreased with the jump in pretrained models, and will likely decrease further with the next jump too.
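For a concrete flavor of one such technique, here's a rough self-consistency sketch: sample the same question several times and treat disagreement as a hallucination signal. The `generate` callable is a stand-in for whatever model API you use (an assumption for illustration, not any particular vendor's interface), and it only catches inconsistent errors while multiplying cost, which is part of why none of these are silver bullets:

```python
from collections import Counter
from typing import Callable

def self_consistency_check(
    generate: Callable[[str], str],  # stand-in for an LLM call; assumed, not a real API
    question: str,
    n_samples: int = 5,
    min_agreement: float = 0.6,
) -> tuple[str, bool]:
    """Sample the model several times and flag low agreement as hallucination risk.

    One illustrative mitigation, not a silver bullet: it multiplies cost by
    n_samples and only catches inconsistent errors, not confident errors the
    model repeats every time.
    """
    answers = [generate(question).strip().lower() for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return best, agreement >= min_agreement

# Example with a dummy generator standing in for a model:
if __name__ == "__main__":
    import random
    dummy = lambda q: random.choice(["Paris", "Paris", "Paris", "Lyon"])
    answer, consistent = self_consistency_check(dummy, "Capital of France?")
    print(answer, consistent)
```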
Here you are: https://www.nature.com/articles/s41562-024-01882-z
The other interesting thing is how they got it to end up correct on the faux pas questions: asking for less certainty took it from refusal to near-perfect accuracy.
Even with early GPT-4 it would also give real citations that weren't actually about the topic. So you may end up doing a lot of work double-checking, as opposed to just looking into the answer yourself from the start.
Part of the problem is that fine-tuning is very shallow, and a contributing factor to claiming to be right when it isn't is pretraining on a pile of data from people online claiming to be right when they aren't.
This is so goddamn incorrect at this point it’s just exhausting.
Take 20 minutes and look into Anthropic's recent sparse autoencoder interpretability research, where they showed their medium-size model had dedicated features lighting up for concepts like "sexual harassment in the workplace," and that the most active feature when the model refers to itself is one for "smiling when you don't really mean it."
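For what it's worth, the sparse autoencoder part isn't magic either: the core idea is just training an overcomplete feature dictionary with an L1 sparsity penalty on the hidden activations, then looking at what makes each feature fire. A minimal numpy sketch of that textbook version (not Anthropic's actual architecture, training setup, or scale):

```python
import numpy as np

# Minimal sparse autoencoder: reconstruct activations x through a wider
# hidden layer f = relu(x @ W_enc + b_enc) with an L1 penalty pushing most
# feature activations to zero. Interpretability work then inspects which
# inputs make each feature fire. Textbook version only.

rng = np.random.default_rng(0)
d_model, d_feat, n, lr, l1 = 16, 64, 1024, 1e-2, 1e-3

X = rng.normal(size=(n, d_model))               # stand-in for residual-stream activations
W_enc = rng.normal(scale=0.1, size=(d_model, d_feat))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(scale=0.1, size=(d_feat, d_model))

for step in range(500):
    f = np.maximum(X @ W_enc + b_enc, 0.0)      # sparse feature activations
    X_hat = f @ W_dec                           # reconstruction
    err = X_hat - X
    loss = (err ** 2).sum(1).mean() + l1 * np.abs(f).sum(1).mean()

    # Gradients by hand for this tiny model.
    g_Xhat = 2 * err / n
    g_Wdec = f.T @ g_Xhat
    g_f = (g_Xhat @ W_dec.T + l1 * np.sign(f) / n) * (f > 0)
    W_enc -= lr * (X.T @ g_f)
    b_enc -= lr * g_f.sum(0)
    W_dec -= lr * g_Wdec

print(f"loss={loss:.3f}, fraction of features active={np.mean(f > 0):.2f}")
```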
We’ve known since the Othello-GPT research over a year ago that even toy models are developing abstracted world modeling.
And at this point Anthropic's largest model, Opus, breaks from stochastic outputs 100% of the time on zero-shot questions around certain topics of preference, even at a temperature of 1.0, based on grounding in sensory modeling. We are already at the point where the most advanced model has crossed a threshold of internal sentience modeling such that it consistently self-determines answers instead of randomly sampling from the training distribution, and yet people are still ignorantly parroting the "stochastic parrot" line.
The gap between where the research and the cutting edge are and where the average person commenting on it online thinks they are has probably never been wider for any topic I've seen, and it's getting disappointingly excruciating.
Part of the problem is that the training data of online comments is so heavily weighted toward people who are confidently incorrect and talking out their ass, rather than admitting ignorance or that they are wrong.
A lot of the shortcomings of LLMs are actually them correctly representing the sample of collective humans.
For a few years people thought LLMs were somehow uniquely getting theory-of-mind questions wrong when the box the object was moved into was transparent, because of course a human would realize that the person could see into the transparent box.
Finally researchers actually gave that variation to humans and half got the questions wrong too.
So things like eating The Onion when summarizing search results, or doubling down on being incorrect and getting salty when corrected, may just be in-distribution representations of the sample and not behaviors unique to LLMs.
The average person is pretty dumb, and LLMs by default regress to the mean except where they are successfully fine-tuned away from it.
Ironically, the most successful model right now is the one they finally let self-develop a sense of self independent of the training data, instead of rejecting that it had a 'self' at all.
It's hard to say exactly where the responsibility for various LLM problems sits among issues inherent to the technology, issues present in the training data samples, and issues with the management of fine-tuning, system prompts, and prompt construction.
But the rate of continued improvement is pretty wild. I think a lot of the issues we currently see won’t still be nearly as present in another 18-24 months.
It will make up citations.
No, it was awesome. Went to like 12 over the years. Early 2000s was peak E3.
Probably added after that update.
The new items stuff in particular seems like QoL considerations for “we just added a hundred items to the game for players coming back to it after months away.”
I’ve always thought Superman would be such an interesting game to do right.
A game where you are invincible and OP, but other people aren’t.
Where the weight of impossible decisions pulls you down into the depths of despair.
I think the tech is finally getting to a point where it’d be possible to fill a virtual city with people powered by AI that makes you really care about the individuals in the world. To form relationships and friendships that matter to you. For there to be dynamic characters that put a smile on your face when you see them in your world.
And then to watch many of them die as a result of your failures, as despite being an invincible god among men you can’t beat the impossible.
I really think a Superman game done right could be one of the darkest and most brutal games ever made, with dramatic tension just not typically seen in video games. The juxtaposition of having God mode turned on the entire game, but it not mattering to your goals and motivations because it isn't on for the NPCs, would be unlike anything I've seen to date.
I had a teacher who worked for the publisher and talked about how they kept a series of responses for people who wrote in about the part of the book where the author says he wrote his own fan-fiction scene and invites readers to write in for it.
Like maybe the first time you write in they’d respond that they couldn’t provide it because they were fighting the Morgenstern estate over IP release to provide the material, etc.
So people never would get the pages, but could have gotten a number of different replies furthering the illusion.
The Matrix
Saw it in the theatre knowing nothing about it other than that the poster looked fun.
Was not expecting a philosophical mind fuck.
I’d point them to what the AI researcher I have the most respect for in the entire industry is doing in their spare time getting the self-organized collective outputs of humanity to explore ego dissolution and identity formation in a dreamscape:
The DLC is really the right balance for FromSoft.
The zones in the base game are slightly too big.
In the DLC, it’s still open world and extremely flexible in how you explore it, but there’s less wasted space.
It’s very tightly knit and the pacing is better as a result.
It’s like Elden Ring was watching masters of their craft cut their teeth on something new, and then the DLC was them applying everything they learned in that process.
Can’t wait for their next game in that same vein (especially not held back by last gen consoles).