ChatGPT is hilariously incompetent… but on a serious note, I still firmly reject tools like copilot outside demos and the like because they drastically reduce code quality for short term acceleration. That’s a terrible trade-off in terms of cost.
I enjoy using copilot, but it is not made to think for you. It’s a better autocomplete, but don’t ever let it do more than a line at once.
Yup, AI is a tool, not a complete solution.
As a software engineer, the number of people I encounter in a given week who either refuse to or are incapable of understanding that distinction baffles and concerns me.
That’s because it’s being advertised as a solution. That’s why you have people worried it’ll take their jobs when in reality it’ll let them do the job better.
The problem I have with it is that all the time it saves me, I have to spend reading the code. I probably spend more time on that than I save, because once in a while the code it produces is broken in some subtle way.
I see some people swearing by it, which is the opposite of my experience. I suspect that if your coding was mostly copying code from Stack Overflow, then it has indeed improved your experience, since that process is now streamlined.
deleted by creator
Same as ChatGPT is better web search.
id rather search the web than chatgpt because i fact check it anyway
I don’t know if it does yet, but if ChatGPT starts providing a source for every piece of information, then it would make it much faster to find the relevant information and check its sources, rather than clicking through websites one by one.
Yep, ChatGPT4 allows optional calls to Bing now.
It used to have a problem with making claims that were not relevant to, or even contradicted, its own sources, but I don’t recall encountering that problem recently.
Biggest problem with it is that it lies with the exact same confidence it tells the truth. Or, put another way, it’s confidently incorrect as often as it is confidently correct - and there’s no way to tell the difference unless you already know the answer.
it’s kinda hilarious to me because one of the FIRST things ai researchers did was get models to identify things and output answers together with the confidence of each potential ID, and now we’ve somehow regressed back from that point
did we really regress back from that?
i mean giving a confidence for recognizing a certain object in a picture is relatively straightforward.
But LLMs put together words by their likelihood of belonging together given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the statements are to be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. So ChatGPT was confident that something is right when the wife says it, simply because it thinks those words belong together.
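To make that concrete, here’s a toy Python sketch (with completely made-up numbers, not a real model) of why per-token “confidence” measures how plausible a word is as a continuation, not whether the resulting claim is true:

```python
import math

# Toy next-token scores (made-up): the model rates how well each candidate
# word continues the sentence, not whether the resulting claim is true.
logits = {"4": 2.0, "5": 1.8, "fish": -3.0}

# Softmax turns the scores into a probability distribution over tokens.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# "5" can score nearly as high as "4" if the context ("my wife says 2+2=5...")
# makes it a plausible continuation: high confidence, wrong answer.
best = max(probs, key=probs.get)
```

The point of the sketch: nothing in `probs` encodes arithmetic, only how words tend to co-occur in the training data.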
ChatGPT what is the Gödel number for the proof of 2+2=5?
Gödel numbers are typically associated with formal mathematical statements, and there isn’t a formal proof for 2+2=5 in standard arithmetic. However, if you’re referring to a non-standard or humorous context, please provide more details.
Of course I don’t know enough about the actual proof for this to be anything but a joke, but there are infinitely many numbers, so there should be infinitely many proofs.
There are also meme proofs out there that I assume could be given a Gödel number easily enough.
they drastically reduce code quality for short term acceleration.
Oh boy, do I have news for you: that’s basically the only thing middle managers care about, short term acceleration.
But LinkedIn bros and corporate people are gonna gobble it up anyways because it has the right buzzwords (including “AI”) and they can squeeze more (low quality) work from devs to brag about how many things they (the corporate owners) are doing.
It’s just a fad. There’s just a small bit that will stay after the hype is gone. You know, like blockchain, AR, metaverse, NFT and whatever it was before that. In a few years there will be another breakthrough with it and we’ll hear from it again for a short while, but for now it’s just a one trick pony.
I always forget Metaverse is a thing.
deleted by creator
It was never a thing.
deleted by creator
I should’ve elaborated… It’s supposed to be a vast, seamless network of interconnected spaces, isn’t it? But that doesn’t exist yet. I think they jumped the gun in that everyone thought Horizons was it. It’s only just starting to be built.
I actually had ‘was’ first, then edited after Googling. It appears it is still a thing, but I can’t really tell for what.
Is there really any utility for blockchain and NFTs?
The hilarious thing about blockchain is that the core concept is actively making the whole thing worse. The matrix protocol is sort of essentially blockchain without the decentralized ledger part, and it’s so vastly superior in every single way.
NFTs just show how fundamentally dumb blockchains are, if you skip the decentralized ledger bit then you never need to invent NFT functionality in the first place…
is sort of essentially blockchain without the decentralized ledger part
So a [Merkle tree](https://en.wikipedia.org/wiki/Merkle_tree)?
i’m not sure whether matrix uses that; what i’m talking about is how it does the things everyone finds neat about blockchains without the inherent downsides, like massive power usage and EVERYONE having to replicate the ENTIRE ledger.
i know IPFS uses merkle trees though, and hilariously blockchains largely rely on that to actually store any significantly sized data.
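For anyone curious, the core of a Merkle tree is small enough to sketch in a few lines of Python. This is a minimal illustration only; real implementations (Bitcoin, IPFS) add details like domain separation and different odd-node handling:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then hash pairs of nodes upward until one root remains."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any single leaf changes the root, which is why systems can verify large data sets by comparing one small hash, no global replicated ledger required.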
Well, if you stop listening to people who think it’s a way to get really rich really fast (which it obviously isn’t), cryptocurrencies are quite useful. International transfers are so much cheaper and easier with them.
Yes, you could, for example, use it to manage who is allowed to park in a garage, anonymously. The owner of a parking spot NFT can unlock the door from the outside. Stuff like that.
However, it’s also possible to do that with a small web application. It’s just that payments and transfers of the parking spots are less open, and it’s not decentralized.
The Proton VPN people are either using or working on using blockchain as a sort of email verification. IIRC it won’t have any cost or change in usage for the consumer, just an added layer of security. I’m not smart enough to understand how, but it sounds neat.
It sounds like bullshit
I may have misread. It’s not clear to me whether it’ll be a staple of their free email service or will be added as part of their paid subscription to all their services, but it still sounds neat. It very well could be bullshit, but I’m interested in learning more.
https://www.tomsguide.com/news/proton-mail-to-use-blockchain-to-verify-recipients-email-addresses
I disagree, because unlike those things, ai actually has a use case. It needs a human supervisor and it isn’t always faster, but chat gpt has been the best educational resource I’ve ever had next to YouTube. It’s also decent at pumping out a lot of lazy work and writing so i don’t have to, or helping me break down a task into smaller parts. As long as you’re not expecting it to solve all your problems for you, it’s an amazing tool.
People said the same things about 3d printing and yeah, while it can’t create literally everything at industrial scale, and it’s not going to see much consumer use, it has found a place in certain applications like prototyping and small scale production.
i’d argue that chat gpt is mostly great at taking bullshit tasks off of humans who don’t want to do the bullshit, like regurgitating the text from a textbook in different words, or writing cover letters for job applications that are often machine-analyzed for buzzwords anyways.
So its use case only exists in the domain of bullshit tasks that exist only to occupy two people without any added value.
Yeah, that is one thing it can do, but it’s not the only thing it does. I’m not sure how to get my point across well, but just because it gives you the wrong answer 25% of the time doesn’t mean it’s useless. Whatever you ask it to do, it often gets you most of the way there, as long as you know how to correct it or check its work. The ability to ask specific or follow-up questions when learning something makes it invaluable as a learning tool (if you’re teaching yourself, that is, i.e. if you’re a university student). It’s also very useful when brainstorming ideas or helping you approach a problem from a different angle. I can also ask it questions that are far more specific than what a search engine would get me.
It really comes down to if the human operator knows how (and when) to use it properly.
i never trust that information. i’d rather learn with a book, videos, or just from websites. if i ever use it i always fact-check it. never blindly trust a computer
That was an integral part of this whole thing: you always fact-check the AI. I said this in both of my comments.
I totally get preferring textbooks or videos though. I just find that the ai saves me time since i can ask specific questions about things, and it often gives me concise information that i understand more quickly.
Yeah, they think it can turn a beginner dev into an advanced dev, but really it’s more like having a team of beginner devs.
It’s alright for translation. As an intermediate dev, being able to translate knowledge into languages I’m not as familiar with is nice.
I’m still convinced that GitHub copilot is actively violating copyleft licenses. If not in word, then in the spirit.
Removed by mod
they drastically reduce … quality for short term acceleration
Western society is built on this principle
Tell me about it…
I left my more mature company for a startup.
I feel like Tyler Durden sometimes.
How you liking it? How many years have you aged in the months working at your startup?
My hairline has started receding very rapidly. There’s these fine hairs all over my desk, and before turning on my camera every meeting, I see the photo I took when I joined.
Doesn’t sound good at all. I’m sorry to hear that, friend. I really hope there’s enough upsides there compared to working at a more mature company for you.
Sort of. Nobody’s cutting corners on aviation structural components, for example. We’ve been pretty good at maximizing general value output, and usually that means lower quality, but not always.
I’m going to say that’s the exception that proves the rule, assuming they were structural parts and not a minor controller chip for de-icing or something.
The company itself announced it without being prompted, and if whoever introduced these unapproved parts into a small number of engines is caught, there’s going to be real hell to pay. The stuff that stops you from falling out of the sky is serious business, and is largely treated as such.
On the other hand, a software function that’s hacked together and inefficient will just fly below the radar, and most people will prefer two cheap outfits to one that’s actually well made for the same price, so quality goes right out the window.
I’ve asked ChatGPT to fix simple bugs that were in the code it’s given me. It usually doesn’t understand that there’s a problem and just rewrites it with different variable names.
It’s helped me a bit with resolving weird tomcat/Java issues when upgrading to RHEL8, though. It didn’t give me an answer, but it gave me ideas on where to look (in my case I didn’t realize fapolicyd replaced selinux)
That’s the point - you have the expertise to make proper sense of whatever it outputs. The people pushing for “AI” the most want to rely on it without the necessary expertise or with just minimal effort, like feeding it some of their financial reports and having it generate a 5-year strategy, only to fail miserably and have no one to blame this time (they’ll still blame anyone but themselves, btw).
It’s not the most useless tool in the world by any means, but the mainstream talk is completely out of touch with reality on the matter, and so are mainstream actions (i.e. overrelying on it and putting way too much faith into it).
Yeah, it can speed up the process but you still have to know how to do it yourself when it inevitably messes something up.
An unpopular opinion, I am sure, but if you’re a beginner with something - a new language, a new framework - and hate reading the docs, it’s a great way of just jumping into a new project. Like, I’ve been hacking away on a django web server for a personal project and it saved me a huge amount of time with understanding how apps are structured, how to interact with its settings, registering urls, creating views, the general development lifecycle of the project and the basic commands I need to do what I’m trying to do. God knows Google is a shitshow now and while Stackoverflow is fine and dandy (when it isn’t remarkably toxic and judgmental), the fact is that it cuts down on hours of fruitless research, assuming you’re not asking it to do anything genuinely novel or hyper-specific.
It helps a complete newbie like me get started and even learn while I do. Due to its restrictions and shortcoming, I’ve been having to learn how to structure and plan a project more carefully and thoughtfully, even creating design specs for programs and individual functions, all in order to provide useful prompts for ChatGPT to act on. I learn best by trial and error, with the ability to ask why things happened or are the way they are.
So, as a secondary teaching assistant, I think it’s very useful. But trying to use the API for ChatGPT 4 is…not worth it. I can easily blow through $20 in a few hours. So, I got a day and a half of use out of it before I gave up. :|
I predict that, within the year, AI will be doing 100% of the development work that isn’t total and utter bullshit pain-in-the-ass complexity, layered on obfuscations, composed of needlessly complex bullshit.
That’s right, within a year, AI will be doing .001% of programming tasks.
Can we just get it to attend meetings for us?
Legitimately could be a use case
“Attend this meeting for me. If anyone asks, claim that your camera and microphone aren’t working. After the meeting, condense the important information into one paragraph and email it to me.”
Here is a summary of the most important information from that meeting. Since there were two major topics, I’ve separated them into two paragraphs.
- It is a good morning today.
- Everyone is thanked for their time. Richard is looking forward to next week’s meeting.
The rest of the information was deemed irrelevant to you and your position.
Holy cow! You’ve done it! You could wrap this (static text block) in a web API and sell it.
Edit: /s, I guess. But that text really is easily an 80% solution for meeting summaries.
Hell yes! I’ll join the front of the hype train if they can demo an AI fielding questions while a project manager reviews a card wall.
Y’know… that seems reasonable. I’d place my bet that there’d be something good enough in only a few years. (Text only, I’d bet)
Big companies will take 5 years just to get there.
Fellow freelancer, I see.
Engineering is about trust. In all other and generally more formalized engineering disciplines, the actual job of an engineer is to provide confidence that something works. Software engineering may employ fewer people because the tools are better and make people much more productive, but until everyone else trusts the computer more, the job will exist.
If the world trusts AI over engineers then the fact that you don’t have a job will be moot.
People don’t have anywhere near enough knowledge of how things work to make their choices based on trust. People aren’t getting on the subway because they trust the engineers did a good job; they’re doing it because it’s what they can afford and they need to get to work.
Similarly, people aren’t using Reddit or Adobe or choosing their cars firmware based on trust. People choose what is affordable and convenient.
In civil engineering, public works are certified by an engineer; it’s literally them saying “if this fails, I am at fault.” The public is trusting the engineer to say it’s safe.
Yeah, people may not know that the subway is safe because of engineering practices, but if there was a major malfunction, potentially involving injuries or loss of life, every other day, they would know, and I’m sure they would think twice about using it.
What’s being discussed here is the hiring of engineers rather than consumer choices. Hiring an engineer is absolutely an expression of trust. The business trusts that the engineer will be able to concretely realize abstract business goals, and that they will be able to troubleshoot any deviations.
AI writing code is one thing, but intuitively trusting that an AI will figure out what you want for you and keep things running is a long way off.
In my hometown there’s two types of public transit: municipal and commercial. I was surprised to learn that a lot of folk, even the younger ones, only travel by the former, even though the commercial lines are a lot faster, more frequent, and more comfortable. When asked why, the answer is the same: if anything happens on municipal transport, you can sue the transport company and even the city itself. If anything happens on a commercial line, there’s only a migrant driver and “Individual Entrepreneur John Doe” with a few leased buses to his name. Trust definitely plays a factor here, but you’re right that it’s definitely not based on technical knowledge.
It’s more thrust than trust.
Hmm. I’ve never thought about it that way. It took a long time for engineering to become that way IIRC - in the past anybody could build a bridge. The main obstacle to this, then, is that people might be a bit too risk-tolerant around AI at first. Hopefully this is where it ends up going, though.
As someone who works on the city side of development review, I can firmly say I’d sooner trust a puppy alone with my dinner than a Civil Engineer.
Are civil engineers known to eat off people’s plates?
Think they confused it with uncivil engineers
Very interesting point. Probably the most pressing problem, then, is to find a way for the black box to be formally verified, so the role of AI engineers shifts to keeping the CI/CD green.
I just used Copilot for the first time. It made me a ton of call-to-action text and website page text for various service pages I was creating for a home builder. It was surprisingly useful; of course I modified the output a bit, but overall it saved me a ton of time.
Copilot has cut my workload by about 40% freeing me up for personal projects
Copilot is only dangerous in the hands of people who couldn’t program otherwise. I love it, it’s helped a ton on tedious tasks and really is like a pair programmer
Yeah it’s perfect for if you can distinguish between good and bad generations. Sometimes it tries to run on an empty text file in vscode and it just outputs an ingredients list lol
Copilot has cut my personal projects by about 40% freeing me up for work
See, your mistake was telling your employer that you have free time.
i’d argue it’s more work to get chatgpt to suggest a CTA of “Download now” or “Learn more” than it is to type it by hand.
I think the correct response is “Wow. Has your mom seen it? Send her the link.”
This is so evil I love it
why? wouldn’t she simply be unable to open it too?
If it’s on the same device, it would open a page showing her what is in the downloads folder of his user. I think the joke is he might have something embarrassing there, but I wouldn’t know since I only have things there when I’m downloading them and then immediately file them away to some actual hyperspecific folder
why would they be on the same device? how can they be on the same device at the same time? also if she gets the full link it would only show her the html page, not the rest of the folder
I think it’s both?
- Send link to her but it doesn’t work because it’s only available on the local machine
- Show the website by first opening the downloads folder then clicking the website
You can bring your device to people. Most people use laptops now instead of desktops
The joke is that Ben’s mom will ask him how to open it. Whether or not Ben thinks that’s possible, Ben will have a bad time.
AI is only as good as the person using it, like literally any other tool in human existence.
It’s meant to amplify the workload of the professional, not replace them with a layman armed with an LLM.
deleted by creator
What’s most likely to happen is that the idiot on the other side would start complaining to the AI and asking to talk to a manager.
And maybe he gets the right hallucination and the AI starts behaving as a manager
This AI thing will certainly replace my MD to HTML converter and definitely not misplace my CSS and JS headers
MD to HTML? You are a blessing dude.
Yea of course! I can link a bitbucket for the file if you’d like. It’s hardcoded to my file structure, but should still be useful if you need something like it
Not really a programmer myself (just bash and a bit of python and html), idk what a bitbucket is but i’d take it just in case I decide to learn it. Thx a lot!
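For anyone curious what such a converter involves, the core can be sketched in pure Python. This is a minimal toy handling only headings, bold, and paragraphs; the file linked above presumably does much more, and a real project would usually reach for a library like `markdown` or `mistune`:

```python
import re

def md_to_html(md: str) -> str:
    """Minimal Markdown-to-HTML sketch: #-headings, **bold**, paragraphs."""
    html_lines = []
    for line in md.strip().splitlines():
        # Inline bold: **text** -> <strong>text</strong>
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        m = re.match(r"(#{1,6}) (.*)", line)
        if m:
            level = len(m.group(1))  # number of '#' gives heading level
            html_lines.append(f"<h{level}>{m.group(2)}</h{level}>")
        elif line:
            html_lines.append(f"<p>{line}</p>")
    return "\n".join(html_lines)
```

For example, `md_to_html("# Title\nHello **world**")` yields `<h1>Title</h1>` followed by a `<p>` paragraph with the bold span.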
Tbf I don’t really wanna do ops work. I barely even wanna do DevOps. Let me just dev
You don’t need to convince the devs, you need to convince the managers.
ಠ_ಠ
Wow, there is a lot of pearl-clutching and gatekeeping ITT. It’s delicious!
Don’t forget that GPT4 was getting dumber the more it learned from people.
On a more serious note, ChatGPT, ironically, does suck at webdev frontend. The one task that pretty much everyone agrees could be done by a monkey (given enough time) is the one it doesn’t understand at all.
I don’t think it’s very useful at generating good code or answering anything about most libraries, but I’ve found it to be helpful answering specific JS/TS questions.
The MDN version is also pretty great too. I’ve never done a Firefox extension before and MDN Plus was surprisingly helpful at explaining the limitations on mobile. Only downside is it’s limited to 5 free prompts/day.
ChatGPT is also great if you have problems with Linux. It is my no. 1 troubleshooting tool.
GPT 4 Turbo is actually much better than GPT 3.5 and 4 for coding. It has a way better understanding of design now.
These morons are probably going to train AI wrong so job security for the next 100 years.
The only thing ChatGPT etc. is useful for, in every language, is to get ideas on how to solve a problem, in an area you don’t know anything about.
ChatGPT, how can I do xy in C++?
You can use the library ab, like … That’s where I usually search for the library and check the docs to see if it’s actually possible to do it this way. And often, it’s not.
deleted by creator
It’s good at refactoring smaller bits of code. The longer the input, the more likely it is to make errors (and you should prefer to start a new chat than continue a long chat for the same reason). It’s also pretty good at translating code to other languages (e.g. MySQL->PG, Python->C#), reading OpenAPI json definitions and creating model classes to match, and stuff like that.
Basically, it’s pretty good when it doesn’t have to generate stuff that requires creating complex logic. If you ask it about tasks, languages, and libraries that it has likely trained a lot on (i.e. the most popular stuff in FOSS software and example repos), it doesn’t hallucinate libraries too much. And GPT4 is a lot better than GPT3.5 at coding tasks. GPT3.5 is pretty bad. GPT4 is a bit better than Copilot as well.
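The “OpenAPI definitions to model classes” task is a good example of mechanical boilerplate generation. A hypothetical minimal version of that transformation (simplified schema shape, made-up helper name) could look like:

```python
# Map OpenAPI primitive types to Python annotations (subset only).
TYPE_MAP = {"string": "str", "integer": "int", "number": "float", "boolean": "bool"}

def schema_to_dataclass(name: str, schema: dict) -> str:
    """Emit Python dataclass source for one OpenAPI-style object schema."""
    lines = ["from dataclasses import dataclass", "", "@dataclass", f"class {name}:"]
    for prop, spec in schema.get("properties", {}).items():
        py_type = TYPE_MAP.get(spec.get("type"), "object")
        lines.append(f"    {prop}: {py_type}")
    return "\n".join(lines)
```

A real generator would also handle `$ref`, nested objects, arrays, and optionality, but this is exactly the kind of repetitive mapping LLMs tend to get right.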
I’ve found it great for tracking down specific things in libraries and databases I’m not terribly familiar with when I don’t know the exact term for them
Bruh this cracked me up
Bruh good to hear bruh, bruh