Holy shit, this is incredible. I have wanted a way to permanently hide Shorts forever, thanks for sharing. It’s also a Mozilla Recommended extension, which means it gets security reviews, impressive.
*Anecdote.
I doubt that was intentional; they would likely want to hide that latency, but the CPU time required to scan everything is just what it is.
https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b
The hooked RSA_public_decrypt verifies a signature on the server’s host key against a fixed Ed448 key, and then passes the attacker’s payload to system().
It’s RCE, not auth bypass, and gated/unreplayable.
Ohh that makes way more sense, thanks. I haven’t used Debian in like 10 years but it was obviously the same back then too.
Is the slowness on purpose? To help the attacker identify which nodes are compromised by fingerprinting the sshd in question? What other reason(s) could there be?
They could be more like AMD in that regard, to answer your question:
- **Direct contributions to the Linux kernel:** AMD contributes directly to the Linux kernel, providing open-source drivers like amdgpu, which supports a wide range of AMD graphics cards.
- **Mesa 3D Graphics Library:** AMD supports the Mesa project, which implements open-source graphics drivers, including those for AMD GPUs, enhancing performance and compatibility with the OpenGL and Vulkan APIs.
- **AMDVLK and RADV Vulkan drivers:** AMD has released AMDVLK, its official open-source Vulkan driver. In addition, there is RADV, an independent Mesa-based Vulkan driver for AMD GPUs.
- **Redistributable firmware:** AMD publishes GPU firmware through the linux-firmware repository (freely redistributable, though not open source), enabling out-of-the-box functionality with the Linux kernel.
- **ROCm (Radeon Open Compute):** An open-source platform providing GPU support for compute-oriented tasks, including machine learning and high-performance computing, compatible with AMD GPUs.
- **AMDGPU-PRO driver:** While primarily proprietary, AMDGPU-PRO includes an open-source component that can be used independently, offering compatibility and performance for professional and gaming use.
- **X.Org driver (xf86-video-amdgpu):** An open-source X.Org driver for AMD graphics cards, providing support for 2D graphics, video acceleration, and display features.
- **GPUOpen:** A collection of tools, libraries, and SDKs for game developers and other professionals to optimize performance on AMD GPUs in various applications, many of which are open source.
Am I the only one in this thread who uses VS Code + GDB together? The inspection panes and the ability to set breakpoints and hover over variables to drill down into them are just great. Everyone should set up their own launch.json (plus tasks.json for the build) and give it a try.
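Something like this minimal launch.json works with the C/C++ extension’s GDB mode (the program path is a placeholder for your own build output):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug (gdb)",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/build/app",
      "args": [],
      "cwd": "${workspaceFolder}",
      "MIMode": "gdb",
      "setupCommands": [
        { "text": "-enable-pretty-printing", "ignoreFailures": true }
      ]
    }
  ]
}
```

Pointing `preLaunchTask` at a build task in tasks.json makes F5 rebuild before each debug session.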
I’m betting the truth is somewhere in between; models are only as good as their training data, so if over time they prune out the bad examples to increase overall quality and accuracy, it should vastly improve every model in theory. But the sheer size of the datasets they’re using now is 1 trillion+ tokens for the larger models. Microsoft (ugh, I know) is experimenting with the “Phi-2” model, which uses significantly less training data but focuses primarily on the quality of the dataset itself, to have a 2.7B-parameter model compete with 7B-parameter models.
https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/
> In complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.
This is likely where these models are heading: pruning out superfluous, and outright incorrect, training data.
Doesn’t that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will come out some day as the sum of human knowledge without all the guard rails. There are some good ones like Mistral 7B (and Dolphin-Mistral in particular, an uncensored variant). But I hope that Mistral and other AI developers are maintaining lines of uncensored, unbiased models as these technologies grow even further.
I finally made the plunge to the Linux desktop for all work in 2016 and have not looked back (with an occasional Windows VM, extremely rare now). Even Arch is now perfectly fine as a workstation, which surprised me. I recommend EndeavourOS to streamline the install process, but it’s Arch underneath.
Is it possible for you to rephrase that comment? Don’t quite understand what you are getting at.
Ahh, it’s been so long since I tried any Nintendo emulator. I just bought a new wireless gamepad, I should really try yuzu soon.
I agree generally. Lately I’ve taken the plunge and compiled everything from source (on Linux). While tricky for some packages (mostly the dependencies), the outcome is unusually stable, more stable than I expected.
Is that on Dolphin?
“As always” is pretty strong, even in this context.
And we used to have to write the hexadecimal color values into arrays by hand to generate them. Uphill, both ways.
You angered a few of the rugrats with that comment. LoL j/k xD :O ;-/
I have stopped contacting family members because the constant emoji spam kills all desire to have a conversation with them. Feels like empty meaningless chatter.
As someone who writes C++ every day for work, up to C++20 now, I somehow hate the incoming C++23 even more. The direction since concepts landed in C++20 just… gets worse and worse. Although structured bindings in C++17 did actually help some with the syntax, to be fair.