Hacker News

To this day I am still wondering what kind of code people write that ChatGPT can possibly help with. All my attempts led to garbage, and I would spend more time fixing the chatbot's output than writing the actual code myself. It does help with some documentation, but even that has glitches.


No one uses it to generate code. Really. Talk to people who actually use it and listen to what they say… they use it to help them write code.

If you try to generate code, you’ll find it underwhelming, and frankly, quite rubbish.

However, if you want an example of what I’ve seen multiple people do:

1) open your code in window a

2) open chatgpt in window b (side by side)

3) you write code.

4) when you get stuck, have a question, need advice, need to resolve an error, ask chatgpt instead of searching and finding a stack overflow answer (or whatever).

You’ll find that it’s better at answering easy questions, translating from x to y, giving high-level advice (e.g. code structure, high-level steps), and suggesting solutions to errors. It can generally produce trivial code snippets like “how do I map x to y” or “how do I find this as a regex in xxx”.
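To make the “how do I map x to y” kind of snippet concrete, here is a minimal Python sketch; the lists are made-up placeholders, not anything from the thread:

```python
# "How do I map x to y": pair up two parallel lists into a lookup table.
xs = ["red", "green", "blue"]
ys = ["#f00", "#0f0", "#00f"]
mapping = dict(zip(xs, ys))
print(mapping["green"])  # #0f0
```

This is exactly the level of question where asking beats searching: one obvious idiom, no project context needed.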

If this looks a lot like the sort of question someone learning a new language might ask, you’d be right. That’s where a lot of people are finding a lot of value in it.

I used this approach to learn kotlin and write an IntelliJ plugin.

…but, until there’s another breakthrough (e.g. latent diffusion for text models?) you’re probably going to get limited value from ChatGPT unless you’re asking easy questions or working in a higher-level framework. Copy-pasting into the text box will give you results exactly like the ones you’ve experienced.

(By higher-level framework I mean, for example: chain of thought, code validation, n-shot code generation, and tests/metrics to pick the best generated code. It’s not that you can’t generate complex code, but naively pasting into chat.openai.com will not, ever, do it.)
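As a rough illustration, the “n-shot generation plus tests/metrics to pick the best code” idea boils down to a loop like this. This is a toy Python sketch: canned lambdas stand in for real model calls, and the sort task is an invented example.

```python
def passes_tests(candidate, tests):
    """Run a candidate implementation against (input, expected) pairs."""
    try:
        return all(candidate(inp) == out for inp, out in tests)
    except Exception:
        return False

def best_of_n(candidates, tests):
    """Return the first candidate that passes every test, else None."""
    for c in candidates:
        if passes_tests(c, tests):
            return c
    return None

# Stand-ins for n generations from a model; a real framework would
# prompt the LLM n times and parse each reply into runnable code.
candidates = [
    lambda xs: sorted(xs)[:-1],  # buggy: drops the last element
    lambda xs: sorted(xs),       # correct
]
tests = [([3, 1, 2], [1, 2, 3]), ([], [])]

winner = best_of_n(candidates, tests)
print(winner([5, 4]))  # [4, 5]
```

The point of the pattern is that the tests, not the human, do the filtering; naive single-shot pasting skips that entire selection step.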


That matches my experience. It's a shortcut to the old process of googling for examples and sifting through the results. I didn't typically cut and paste from those results, and if I did, it was mostly as a scaffold to build from, deleting a fair amount of what was there.

Many times it works really well and surfaces the kind of example I need. Sometimes it works badly. Usually when it's bad, the google/sift method gives similar results. Which I guess makes sense: it couldn't find much to train on, so that's why its answer wasn't great.

One area where it works really well for me is third-party APIs whose documentation is mostly just class/function/etc. listings. ChatGPT generally does a good job of producing an orchestrated example with relevant comments that helps me see the bigger picture.


Me too. As someone who used to be a dev but hasn't written code professionally in twelve years or so, it was such an amazing accelerant. My iteration loop was to contextualize it (in English and in code), ask how to do a thing, look at its response, tweak it, execute, see what happened, alter it some more.

The fact that it usually had errors didn't bother me at all -- it got much of the way there, and it did so by doing the stuff that is slowest and most boring for me: finding the right libraries / functions / API set up, structuring the code within the broader sweep.

Interesting side note: unpopular languages that have been around a long time and have a lot of high-quality, well-documented code / discussion / projects around them are surprisingly fecund territory. Like, it was surprisingly good at elisp, given how fringe that is.


With GPT-4, you can often just paste the error message in without any further commentary, and it will reply with a modified version of the code that it thinks will fix the error.


And then you waste time fixing the error the "fix" GPT introduced. Clever.


I've used it on this side project for:

- A rough crash course in F#. I'll say "what's the equivalent in F# of this C# concept?". It will often explain that there is no direct concept, and give me a number of alternative approaches to use. I'll explain why I'm asking, and it'll walk through the pros/cons of each option.

- Translating about 800 lines of TypeScript JSON schema structures to F#. A 1:1 translation is not possible since TypeScript has some features F# doesn't, so ChatGPT also helped me understand the different options available to me for handling that.

- Translating pseudo-code/algorithms into idiomatic F# as a complete F# beginner. The algorithms involve regex + AST-based code analysis and pattern matching. This is a very iterative process; usually I ask for one step at a time and make sure that step works before I move on to the next.

- Planning design at a high-level and confirming whether I've thought through all the options carefully enough.

- Adding small features or modifications to working code: I present part of the function plus relevant type definitions, and ask it for a particular change. This is especially useful when I'm tired - even though I could probably figure it out myself, it's easier to ask the bot.

- Understanding F# compiler errors, which are particularly verbose and confusing when you're new to the language. I present the relevant section of code and the compiler error, and 90% of the time it tells me exactly what the problem and solution is; 5% of the time we figure it out iteratively. The last 5% I have to stumble through myself.

- Confirming whether my F# code is idiomatic and conforming to F# style.

- Yes it makes mistakes. Just like humans. You need to go back and forth a bit. You need to know what you're doing and what you want to achieve; it's a tool, not magic.

Note: this is the commercial product, ChatGPT-4. If you're using the free ChatGPT 3.5, you will not be anywhere near as productive.
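For readers unfamiliar with "AST-based code analysis" as mentioned in the pseudo-code bullet above, here is a minimal sketch using Python's stdlib ast module rather than F# (the original code isn't shown, so the source snippet and the "flag eval calls" rule are my own invented example):

```python
import ast

source = """
def handler(x):
    print(x)
    log = eval(x)   # flag this
    return x
"""

# Parse the source into a syntax tree, then walk it looking for a
# structural pattern: any call whose callee is the bare name "eval".
tree = ast.parse(source)
flagged = [node.lineno for node in ast.walk(tree)
           if isinstance(node, ast.Call)
           and isinstance(node.func, ast.Name)
           and node.func.id == "eval"]
print(flagged)  # [4]
```

Pattern matching over a tree like this is far more robust than regex alone, which is why the two techniques are typically combined.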


I used GPT-4 to generate Python code which generates C++ code for a hobby project of mine, using a library I'd never used before. The iteration speed is ridiculously good, and no one in any of the IRC or Discord channels I visited pointed me even in the general direction of such a simple solution.

https://chat.openai.com/share/d041af60-b980-4972-ba62-3d41e0... https://github.com/Mk-Chan/gw2combat/blob/master/generate_co...


Programmers[1] have a complexity bias that interferes with the idea that LLMs can write useful code.

I had a problem last week where I wanted to extract sheet names and selection ranges from the Numbers app for a few dozen spreadsheets. ChatGPT came up with the idea of using AppleScript and, with a bit of coaxing, wrote a script to do it. I don't know AppleScript and I really don't want to learn it. I want to solve my problem, and its 10 lines of AppleScript did just that.

We're nowhere near LLMs being capable of writing codebases, but we are here for LLMs being able to write valuable code, because those concepts are orthogonal.

1. some, most


I am confused: the "code" you described is likely a Google search away. Well, I mean, Google has become useless, but when it worked it was able to find such stuff in one search. So really all I am getting is that GPT is a better Google.


I'm not quite understanding. It sounds like I was supposed to use Google search before it stopped working?

A great counterexample would be a Google query which includes the answer in the results.


It’s good at a beginner, early intermediate level when you need help with syntax and structuring basic things. It’s an excellent helper tool at that stage.

But it’s obvious that outside of junior dev work and hobby projects, there’s no way it could possibly grasp enough context to be useful.


Stage? A lot of developers don't realize that they're all the same personality type which is good at particular things. LLMs give everyone else this advantage. You just don't realize it yet because you were never aware of the advantage in the first place.


> they're all the same personality

Yeah, usually it was the shittiest managers I ever met that shared this belief. It sounds like they all repeat the same thing: GPT is a better Google.


Are you calling me a shitty manager? Would you say this to my face? What's wrong with you?


I am not, but there's a tendency among those types to categorise people into narrow sets they can understand. Also, your statement doesn't make much sense. Being a good developer means you understand a wide range of issues, not just spelling in a language and adding if statements; the combinations of personalities vary wildly. To be fair, LLMs, if anything, will help developers become better managers, simply because developers understand what needs to be done. Instead of deciphering what someone meant by a vague feature request, you can ask a statistical system (an "AI", as some call it) what the average Joe wants, and then get it done.


> I am not, but there's a tendency among those types that categorise people into narrow sets that they can understand

Those types huh


I'm not really following what you're trying to say. What is that personality type, and what are the things it's good at? What is the LLM advantage?

Not saying your post is devoid of substance, just trying to educate myself on my blind spots.


There are three types of intelligence: intuitive, cognitive and narrative.

Tech has, for years, seen people with cognitive/narrative intelligence as the people who are actually smart.

LLMs help the intuitive people reach the level of the cognitive/narrative people. Cognitive/narrative people can't really understand this in the same way the intuitive people are bad at syntax or database structure. The steamrolling will be slow and merciless.


Could you give a concrete example of where ChatGPT is likely to provide a competitive advantage to intuitive people with weak cognitive/narrative capabilities?


Regex.
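For instance, the kind of exchange meant here is "describe the pattern in English, get a working regex back". A hypothetical example (the date-matching task is my own illustration), with the result checked in Python:

```python
import re

# Request: "a regex that matches ISO-style dates like 2023-04-01"
pattern = r"\b\d{4}-\d{2}-\d{2}\b"

text = "shipped 2023-04-01, hotfix 2023-05-15, build 1234-5"
print(re.findall(pattern, text))  # ['2023-04-01', '2023-05-15']
```

Writing regexes rewards exactly the syntax-memorization skill that intuitive people often lack, which is why it is such a clean example of the advantage.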


Did some ETL with Python. ChatGPT got it right 99% of the time. And, remarkably, it understood a public API that I was feeding from, which used 2-letter abbreviations.


If you are trying to do something with an API that you have no experience with, it will get you up and running quickly. E.g. how do I get all Kubernetes ConfigMaps in a given namespace older than n days, in Go? It gives you the bones of how you create and configure a client and query Kubernetes to get the information you are looking for. It's much quicker than googling and parsing a tutorial.
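The shape of the answer you get back looks roughly like this, sketched here in Python with the official kubernetes client rather than Go. The client call is left as a comment because it assumes a kubeconfig and a reachable cluster; the age filter is the runnable part, and the namespace name is a placeholder:

```python
from datetime import datetime, timedelta, timezone

def older_than(creation_ts, days, now=None):
    """True if a resource's creationTimestamp is more than `days` old."""
    now = now or datetime.now(timezone.utc)
    return now - creation_ts > timedelta(days=days)

# With the official client (pip install kubernetes), the query itself is:
#   from kubernetes import client, config
#   config.load_kube_config()
#   v1 = client.CoreV1Api()
#   old = [cm.metadata.name
#          for cm in v1.list_namespaced_config_map("my-namespace").items
#          if older_than(cm.metadata.creation_timestamp, 30)]

now = datetime(2024, 1, 31, tzinfo=timezone.utc)
print(older_than(datetime(2024, 1, 1, tzinfo=timezone.utc), 7, now))   # True
print(older_than(datetime(2024, 1, 30, tzinfo=timezone.utc), 7, now))  # False
```

The "bones" in question are the client setup and the list call; the filtering is plain date arithmetic once you know where the timestamp lives.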


I’ve used ChatGPT for:

- De-obfuscating obfuscated JS code

- Unminifying JS code, asking it to guess function names based on the functionality

- Working with it like a rubber duck to plan out possible solutions to a code problem

- Suggesting function names based on the functionality

- Naming repos

- Modifying a piece of Go code to add specific functionality. I don’t know how to write Go; I can read it and grok the high-level functionality


It's what Clippy always wanted to be.


Treat it like your rubber duck.


Are you using 3.5 or 4?



