I asked ChatGPT to create and code paranormal investigation gear for me
I wouldn't say that it went "well."
A couple days ago, I watched a video about how a YouTuber with no coding experience was able to make a simple platform game using ChatGPT. Basically, he told ChatGPT-4 what he wanted it to do and it gave him code. He kept testing the code, asking questions about additional features and for help debugging, and eventually he ended up with a basic, working game that didn't look too shabby. He said it took him about 8 hours and it cost about $20 (he had to pay to use the API because he exceeded the limit of questions that he could ask ChatGPT-4).
Now, I think there are many ethical issues with large language models (LLMs, often just called "AI"), and I am a bit of a pessimist because of the way they have been rolled out, overhyped, misunderstood, and deployed. I even feel torn on whether just writing about uses for LLMs is unethical.
But, just like last time I tested out some "AI" tools, I was curious to try this out for paranormal investigation devices and programs. Call me a cat, because I have a bad habit of following my curiosity regardless of other risks and considerations.
This was purely an experiment; my success was . . . mixed. It's also important to remember here that LLMs are, for all intents and purposes, a fancy form of autocorrect.
So my experience makes sense to me: I got better answers for more straightforward things like code, which has tons of high-quality examples online for the model to be trained on, and slightly nonsensical answers for more obscure projects.
Here's what I asked it for, and here's what I got
I prompted it for three different things, going from more obscure/specific/difficult questions that required both C++ code and information about components and hardware assembly, to a pretty simple Python coding question.
For reference, I wouldn't say that I "know" how to code. My knowledge of Python, HTML, and CSS is so rudimentary that it's practically nonexistent, and I've never written or played around in any other language.
My electronics knowledge is beginner level at best. I can do basic soldering, have watched a lot of YouTube videos about circuits and Arduinos, and have read through a bunch of how-tos, but that's it.
So I figured I'm a good test case to see whether LLMs can be helpful to people who know next to nothing about a subject.
Instructions (components, code, and steps to build) for some biodata sonification device builds.
At this point, I've read through instructions for a handful of variations of a biodata sonification device, and I was curious what information ChatGPT-3 would give me about building a similar device.
I haven't attempted this build yet and don't have the technical knowledge to spot most issues for this sort of thing; if I knew this stuff already, I wouldn't be asking an LLM for advice. But even as a novice, some of ChatGPT's instructions seemed . . . off.
First off, I told it the basic functionality of the existing, seemingly most popular biodata sonification device build, and I asked it to give me instructions for creating one using an Arduino Uno.
It initially gave me extremely basic instructions (so vague that I could have written them off the top of my head right now, having never attempted the build). No specific components were mentioned; it was fuzzy, obvious stuff like "Attach the electrodes to the leaves and connect them to the Arduino Uno board using wires." Classic LLM bullshit.
Next, I asked it to write me some code. It looked . . . okay . . . to me, a person who doesn't know C++ and hasn't actually used an Arduino Uno before. Let's say the code looked somewhat plausible when I read through it. I could see what it was doing, but had no idea if that was the right way to do it or if it'd work with the components.
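To give a sense of what code like that is even trying to do, here's the rough gist in Python (the only language I halfway know) instead of Arduino C++. To be clear, this is my own illustration, not ChatGPT's output, and everything in it (the serial port name, the value range, the scale) is a placeholder rather than a spec from any real build. It assumes pyserial and mido, plus a MIDI backend, are installed:

```python
# The gist of "biodata sonification," hardware aside: read a stream of
# conductivity-ish numbers and turn them into MIDI notes. Illustration only;
# the port name, baud rate, 0-1023 range, and scale are all assumptions,
# not details from any actual build.
import time

import mido     # MIDI output (needs a backend like python-rtmidi)
import serial   # pyserial, for whatever the Arduino prints over USB

PORT = "/dev/ttyACM0"              # hypothetical Arduino serial port
SCALE = [60, 62, 64, 67, 69, 72]   # a pentatonic-ish handful of MIDI notes

with serial.Serial(PORT, 9600, timeout=1) as ser, mido.open_output() as synth:
    while True:
        line = ser.readline().strip()
        if not line:
            continue
        reading = int(line)  # assume the sketch prints one analogRead() value per line
        # Bucket the 0-1023 reading into one of the notes in SCALE.
        note = SCALE[min(reading * len(SCALE) // 1024, len(SCALE) - 1)]
        synth.send(mido.Message("note_on", note=note, velocity=80))
        time.sleep(0.2)
        synth.send(mido.Message("note_off", note=note))
```

The hard part, of course, is the hardware that actually produces those numbers.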
ChatGPT's code mentioned sensor input, but the LLM hadn't given me any information about what sorts of sensors we were looking at. I pushed it and it really, really didn't want to give me specifics about sensors.
When it finally gave me product names for sensors, they seemed to be mostly hallucinations. I looked up the "sensors," one of which cost upwards of $1,000 and was, in fact, an unrelated medical device. The LLM also included some sensors for hobbyist electronics projects, but those components didn't necessarily do anything related to this build.
I asked it for schematics, and it repeatedly gave me dead Imgur links. Even when I told it the link was broken and I asked for an unbroken link, it just sent me another, different broken link. Since ChatGPT isn't connected to the internet, it's possible that it was trying to retrieve a schematic that had existed when it was trained but is no longer there.
But, based on what I've read about this particular LLM, I doubt that's the case. I'd be somewhat surprised if Imgur hosted schematics for such a specific, somewhat uncommon project (especially when I know that the main guy who's done the work on this, Sam Cusumano, has his own website, so it wouldn't make sense for schematics to be hosted separately on Imgur). So I think this was another hallucination.
At some points, the LLM obviously "knew" it was providing false information to me. Though, again, it's important to keep in mind here that LLMs are basically just fancy predictive text; it can't actually "know" or "think" anything. So when I called out inaccuracies and it apologized and agreed with me, there's a part of me that wonders if it was capable of knowing whether it was right or wrong. (After all, if it "knew" it was wrong, then why did it give me the wrong answer to begin with?)
Here's a concrete example: When I asked it for a cheap sensor that was appropriate for the project (after it suggested that expensive medical device, claiming it should be used for the project), it told me I could use a specific temperature probe that it claimed "can also measure electrical conductivity."
I looked it up, and unless my reading comprehension is way worse than I thought, the temperature probe is just a temperature probe.
I replied, "It doesn't look like the Vernier Stainless Steel Temperature Probe measures electrical conductivity."
It backtracked immediately: "I apologize for the confusion. You are correct, the Vernier Stainless Steel Temperature Probe does not measure electrical conductivity. It is primarily used to measure temperature." It then sent me the model name of another temperature probe that (you guessed it) is just for measuring temperature.
Eventually, I fed it some other components that I wanted it to use, based on the actual components that are listed on some of the DIY builds, such as a 555 timer IC. It gave me slightly more plausible information, though I'm still too clueless to guess whether it would work or not.
The verdict on this one: ChatGPT was supremely unhelpful and unreliable.
Instructions (components, code, and steps to build) for several variations of an Ovilus-like device I've been thinking of making.
I decided to try something slightly simpler, asking ChatGPT to assist with some other gadgets I've been spec'ing out in my head.
I ran into the same sensor issue here. In my specifications, I asked for the input to be changes in EMF, but I struggled to get it to direct me to what sort of sensor I should use. (It was vague, just saying "EMF sensor.") I finally got it to send me some specific sensors, though after researching them, it was unclear to me whether they'd work. (They weren't components used in DIY EMF sensor builds I'd seen online, at least.)
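For what it's worth, the software half of what I was picturing is simple enough to sketch without committing to any particular sensor. This is just my own toy illustration of the "changes in EMF trigger words" idea, with fake data standing in for the sensor; it's not anything ChatGPT gave me and not a working design:

```python
# Toy sketch of the Ovilus-ish idea: watch a sensor value and, when it
# jumps by more than some threshold, spit out a word. read_emf() is a
# stand-in for whatever sensor would actually be used; the word bank and
# threshold are made up.
import random
import time

WORDS = ["cold", "stairs", "leave", "seven", "behind"]
THRESHOLD = 15  # arbitrary "big enough change" in raw sensor units

def read_emf():
    """Placeholder for a real sensor read; fake data so the sketch runs."""
    return random.gauss(100, 10)

last = read_emf()
while True:
    current = read_emf()
    if abs(current - last) > THRESHOLD:
        print(random.choice(WORDS))  # a real build might use text-to-speech here
    last = current
    time.sleep(0.25)
```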
I tried a simpler version based on temperature and with a lot of the functionality stripped back. Interestingly, it gave me a circuit diagram for that one ("drawn" in a code block) rather than a fake Imgur link.
However, the LLM didn't seem to quite understand my specifications for the device. Maybe I could have prompted it to understand eventually, but at that point I wanted to move on to something simpler that I could test out right away.
My verdict: ChatGPT had the vibe of a Hollywood screenwriter coming up with a vague technological MacGuffin that will be plausible enough that the characters can talk about its inner workings, but not likely a device that would work in real life. (Real "hacking the mainframe" vibes.)
Code for a spirit box using local mp3s.
I know this has been done before, but I thought a spirit box program would be an interesting test, since it's simple and there's no hardware required. I downloaded some old-timey radio broadcast mp3s from Archive.org and then asked for some code.
This one . . . kinda worked. I ended up going through about seven or eight iterations with the LLM, but I couldn't quite get it to do exactly what I wanted it to.
Though ChatGPT is supposed to "remember" things you said earlier in the conversation, it kept "forgetting" basic stuff like the fact that I was using mp3s rather than wav files. It took some nudging to stay on track; basically, anytime it would be more convenient to drift away from my specifications, it did. So that's something to stay vigilant about if you're using LLMs for this sort of thing.
However, I think that if I spend another 15-45 minutes on this (famous last words), I'll be able to get the code to do what I want it to. Part of what was tripping things up was that my antivirus wasn't playing nice when I tried installing ffmpeg. At that point, I was about at the end of my time (and patience) for the day, so I walked away.
But I'll probably return to this one another time, since I do actually think it'll work once I install ffmpeg and maybe tweak a couple things.
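For anyone curious, the general shape of what I'm going for looks roughly like this. It's my own simplified sketch rather than ChatGPT's actual code, and it assumes pydub for playback, which is what drags ffmpeg into the picture for decoding the mp3s; the folder name and timings are placeholders:

```python
# Rough sketch of a local-file "spirit box": sweep through random snippets
# of downloaded radio broadcasts. Assumes pydub (which needs ffmpeg to
# decode mp3s) and a folder of mp3 files.
import random
import time
from pathlib import Path

from pydub import AudioSegment
from pydub.playback import play

AUDIO_DIR = Path("radio_mp3s")  # hypothetical folder of Archive.org downloads
SNIPPET_MS = 300                # how long each "sweep" fragment plays
PAUSE_S = 0.1                   # gap between fragments

# Load every mp3 up front so the sweep itself stays fast.
clips = [AudioSegment.from_mp3(str(p)) for p in AUDIO_DIR.glob("*.mp3")]

while True:
    clip = random.choice(clips)
    # Pick a random starting point that leaves room for a full snippet.
    start = random.randint(0, max(0, len(clip) - SNIPPET_MS))
    play(clip[start:start + SNIPPET_MS])
    time.sleep(PAUSE_S)
```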
The verdict: ChatGPT seems promising for use in silly little personal coding projects.
Takeaways
Remember to take these thoughts with a grain of salt, since I'm coming at it from the point of view of "relatively clueless person who wants the LLM to do some tech stuff for them" rather than someone with deep technical expertise.
Because of that, it's a bit hard for me to separate out hallucinations and errors from accurate info. As Per Axbom pointed out a couple weeks ago:
> One problem with the diligent, incorrect answers is of course that it is difficult to know that ChatGPT is wrong unless you already know the answer yourself.
That's a big issue with LLMs. They're known for being inaccurate, but if you already knew all the answers to something, why would you be feeding prompts into an LLM to get more info?
Typically, people who aren't knowledgeable about a topic consult LLMs, not people who are already subject matter experts. (I can say that when I have asked ChatGPT about things I'm very knowledgeable about, like New York City urban legends, it's given me answers that might seem right to the uninitiated, but that were deeply incorrect and full of hallucinations. It also refused to provide me with citations, of course. Though I've read about it making up citations when pressed by journalists, so I wouldn't trust any citation from an LLM that I couldn't verify myself.)
So anyway, as someone with limited technical knowledge, it seems to me that:
- ChatGPT doesn't seem good at coming up with instructions for niche hobbyist DIY builds. The code portions seemed promising, but it repeatedly hallucinated when it came to physical sensors, schematics, and circuits. When I pointed out its inaccuracies, it quickly backtracked without explaining why it told me the wrong thing in the first place. And who knows how many incorrect things it told me that I just don't know enough to notice.
- It does seem to have some utility when coding, however. The Python code it gave me for the spirit box was clear and helpful. When I encountered bugs, I sent it the error text and it told me how to fix them. When the initial versions it sent me weren't quite doing what I wanted, it adjusted the code for me. It was also pretty good about explaining everything, through comments in the code and instructions outside of the code.
I think LLMs have a huge number of problems and should not be used to teach and create things in general. (Heck, it can barely do basic math consistently and correctly; my trusty Sharp EL-377T manages to best it there.)
But I do see how LLMs might be used as a supplemental tool for learning to code. It seems helpful for someone like me; I have a very, very rudimentary understanding of Python, and I found learning it from the basics a little boring.[^1] But I could see learning more by asking an LLM to write code for me and modifying that.
That tempered positivity aside, I do recommend proceeding with caution when it comes to asking LLMs to write code for you, especially if you're doing something important, proprietary, or work-related. Maybe this goes without saying, but the "AI" hype is such that I feel like I gotta say it anyway. There are serious security considerations to keep in mind when doing this sort of thing. I felt fine using it for a goofy project on my personal machine, but, you know, use your head when it comes to LLMs (and any overhyped tech).
Anyway, if I get the spirit box code to do what I want it to, I'll post it.
[^1]: I have trouble focusing on things when there isn't a clear purpose in the short-ish term. When I was learning Python, everything I wanted to do with it was so advanced that it wasn't enough to motivate me to put in the time to learn enough to actually accomplish anything. (See: ADHD.)