What's your experience with and feelings about using AI tools for coding?

The only part of today's AI architecture that we can rely on is its pre-computed model. We may call it “a world model”, but then it's a pretty poor one, because it's static and unmodifiable after creation.

Or we may say that the kaleidoscope-like “attention process” is the world model; again a pretty poor one, because it's random and unreliable, easily led astray by different names for variables and/or functions.

And indeed, there are lots of attempts to add something to LLMs to simulate a world model… but that's where we started 60 years ago; why are we so sure that this time we would suddenly do much better?

We are still missing the vital link: we know how to create world models (and have known for decades), but that process doesn't scale, because until now only humans could do it. We also know how to produce a drunk scribe who knows a lot but is inebriated and thus can't “think straight”… yet we have no idea how to connect these two things, or even whether they can be connected at all.

And that is how we end up in a position where using AI to create something, the “white-hat use of AI”, is extremely hampered by that problem… but it's not an issue at all for nefarious uses! If your scam hasn't worked and you couldn't swindle money from one victim with a deepfake… there are millions of other potential victims!

I wonder why that discrepancy is never discussed… it's a much more immediate and direct threat to humanity than an imagined “rogue ASI”.

AI doesn't need to match human capabilities to become a dangerous threat to our civilization; that's something that's completely ignored in discussions about the dangers of AI.

They will impact the rest of the developers/programmers the same way, whatever that turns out to be.

At the risk of coming off as yet another contrarian: there is no AI. There is a buzzword, hyped to the sky and above with no sense of moderation or restraint by the same folks who tried to cram their AR/VR nonsense down the throat of everyone who'd listen; before pivoting to crypto/blockchain/decentralized/Web3, before pivoting again to our shiny new fancy term.

"Intelligence" is an abstract, ephemeral, chiefly human concept. Some of the intelligence we are born with is innate: a baby doesn't need to predict the next N "tokens" most likely to follow their brain's inner UTF-8 "prompt" to seek the attention of a parent. Some of it is based on experience: you can't encode a career of 20 years coding, debugging, brainstorming, and establishing viable communication channels with other people into an LLM/GPT trained on books and f* Reddit.

Every single word you're reading now is an abstract, ephemeral, chiefly human concept: a mix of experience + thought + emotion + history + all manner of associations with other concepts in turn. Language itself is a projection of our intellectual capacity. It is not a fully-featured, self-sufficient, stand-alone container capable of representing every facet of our collective knowledge on its own.

Whatever digitized UTF-8 based "intelligence" an LLM/GPT possesses has exceedingly little to do with the intelligence a human being is capable of. Why in the world a bunch of self-obsessed f*s in their Silicon Country of Hype and B* chose to market and sell the former as in any way representative of the latter is as much your guess as it is mine.

As long as we agree not to spit on the entirety of our neurobiological prowess just to make a bunch of glorified auto-complete peddlers alongside their VC sugar daddies happy: let's continue.

Define "normal". IT as an industry as a whole is one of the best representatives of the adage "the only constant in life is change". People used to talk to rubber ducks when figuring out why their code wouldn't work. Now they talk to whatever prompt box their YT/IG/TT feed has sold them on as "The Most Advanced AI Companion" out there. The work remains the same.

None of it happens without extensive brainstorming + design + understanding of the interactions between the different parts involved. "Delegating" it to an "agent" involving a bunch of LLMs gradient-optimized into producing the most likely UTF-8 output from a given prompt means someone will need to check + understand + correct or re-prompt each and every bit.

How in the world would a brain-fried "prompt engineer" who has never written / shipped / debugged a piece of code on their own do that? Would they spin up another "agent" to check on the work of the previous slop machine? Who is going to check the work of this last one? Another "agent"?

You can't derive your own optimization function from hearsay and second-hand experience. People who have been in the industry for 20+ years will have little to no clue and/or interest in the problems of folks just starting out. Senior SWEs with 100/250/500k+ stashed in the bank have all the freedom in the world to play with and talk about whatever shiny toys others are pumping out. Unless you're one of them, drooling over each and every article or tweet they post will get you nowhere.

Don't flap your ears left and right. Choose the sector you're interested in. Track what's going on in it. Not someone's impression of what's going on in it. Not someone's thought or reaction or hot take on whatever happened to hijack the attention span of a bunch of severely under-employed social media addicts who live and die by the amount of hype and drama they inject into their minds on a daily basis. Follow the raw data: as close to the source as you can get. Otherwise you'll waste all the time and focus in the world on what ultimately has nothing to do with your own life whatsoever.

Be mindful of incentives. Reading a post by the CEO behind yet another LLM wrapper talking about how "AI WILL CHANGE THE WORLD" will do you as much good as listening to your barber telling you why you should definitely get a haircut, from him, three times a day. The same goes for listening to anyone who "prides" themselves on "never" using a GPT because they're "above" any and all prompting. Do your own research. Conduct your own testing. Make your own choice.

AI does a great job at declarative programming, where instead of formal declarations I can use informal ones in natural language. However, I am more concerned about who holds ownership of the thousands of lines of code the AI generated. But that isn't about Rust, so I need to raise the topic in some other place. Ask AI where?
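As a minimal sketch of what "declarative with an informal spec" can mean (the function name and data here are made up for illustration): an informal request like "take the even numbers, square them, and list them largest first" maps almost word-for-word onto a declarative iterator chain in Rust, which is the kind of translation these tools tend to handle well.

```rust
// Hypothetical illustration: each clause of the informal spec
// corresponds to one adapter in the chain.
fn evens_squared_desc(input: &[i32]) -> Vec<i32> {
    let mut out: Vec<i32> = input
        .iter()
        .filter(|&&x| x % 2 == 0) // "take the even numbers"
        .map(|&x| x * x)          // "square them"
        .collect();
    out.sort_unstable_by(|a, b| b.cmp(a)); // "largest first"
    out
}

fn main() {
    assert_eq!(evens_squared_desc(&[1, 2, 3, 4, 5]), vec![16, 4]);
}
```

The ownership question, of course, stays open regardless of how cleanly the informal spec translates.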

I think you have hit on a killer product idea. Rubber ducks enhanced with AI.

I like my ducks silent.

I can understand. It’s just that I have been getting all sorts of ideas from AI that I had never heard of or might never have thought of. Maybe not correct, but often food for thought. And there is nobody around here I can talk Rusty with. My partners/colleagues are still Python heads. My rubber ducks stay in the bathtub.

In another discussion, someone asked me for the links to those studies that showed why it was indeed a bad idea to use LLM-based AI to generate code. Since I had to retrieve them, I may as well post them here:

And two articles debunking the stats in a GitHub "article" about Copilot's code quality, perhaps less extreme than another one posted above on the same topic:

I read part of the 1st article and it felt wrong, maybe fully AI generated. (But it could well be my mistake.)

A few things that feel wrong are:

  1. 2 of the 3 researchers are part of the company
  2. they keep qualifying responses as “valid” without detailing the criteria
  3. the questions are taken from elsewhere and just adapted (feels unoriginal), and even those seem like poor questions. And they still say they used AI for the questions? Seems too much help for such a simple task.
  4. Section 4 starts with “When asked about the overall impact of AI tools on work.” and that's a paragraph?
  5. The pie chart of that section has one colour mismatched with respect to the legend (orange and yellow are mixed up)
  6. It may be correct, but overall it felt sketchy to me; I could be wrong
  7. I think it is still good to check those sources.

At this point it seems to me that just trying them out with some care may be easier.

That'd be the cherry on top! But who knows, maybe. :sweat_smile: They do admit the introduction and discussion were “produced with the assistance” of ChatGPT. Quite the recursive assessment process…

Heh. What did you find?

I could only find 2 of the 3 authors of "The Impact of AI Tools on Software Development" on LinkedIn. The lead author still holds the same position as professor (which must explain why his name is listed 1st; it's quite common with student papers), and Erick Ribeiro doesn’t seem to work for GitHub or Microsoft. It’s not clear exactly when the article was written, but it’s definitely later than 2020, so I assumed it was a follow-up to the work Ribeiro and Ana Carolina Oran did when they were students at Uni. But I’ve not investigated further than that.

Either way, I didn’t find the conclusion shocking. The quality of the article is what it is; we see similar errors of form and poor style in many research articles, even when they’re supposed to be peer-reviewed by external people. They’re often co-written by students under pressure and “corrected” by professors who think they have a better command of prose than anyone else, so the result is sometimes a little ugly (been there, done that).

Yes, trying with care seems the wisest. Those things change quickly, even if I think the underlying technology will never allow those tools to be used reliably for problem-solving tasks in their current form. I’m doing regular tests to see the progress, but so far I can’t say the results would change my opinion. If anything, the issues feel more insidious.

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.