Artificial Intelligence

I just posted on this on the Main Page, so feel free to check it out if you’d like. I don’t want to come off as an alarmist, but I have been finding the usage and increasing prevalence of A.I. in various spaces to be both interesting and disturbing.

I have been messing with various models, including the ever-popular ChatGPT, and have had it “write” a few reviews, in different styles, of various games I am quite fond of. How accurate, quick and seemingly intelligent it can be with different prompts is unreal, in one sense of the word. I am curious how this will continue to evolve over time and whether it will have any impact on gaming and other aspects of media.

1 Like

I hope this AI stuff isn’t like opening Pandora’s box, so to speak.

1 Like

Given the discussions that were had regarding it on the main page, I wouldn’t be too hopeful. I guess time will tell whether this has any significant impact on gaming and other aspects of society, both short-term and long-term.

made the Sega reboot of #TheLastOfUs with #midjourney

I want to play this!

A.I. can be useful in medicine and other technical applications. Other than that, it’s just a trend.
In art, it’s a dangerous “generator” (though it just mixes other people’s stolen works) of awful things that people use just to save the money an artist would charge.
As for this “intelligent AI that talks with you”… it just steals thousands of other people’s answers and repeats them like a parrot, so no worries, just a trend.


I feel that this response is similar to what is commonly said in the animal cognition field when something amazing is observed: “well, it was trained to do so”. How is that completely different from how people learn, other than that we know how we feel and believe we are special for doing so?


The AI chatbots out there are basically like a REALLY advanced version of autocomplete that can predict the next string of text in a sequence really well. They are really impressive, but not quite truly intelligent in my eyes.

I don’t see them as a fad, though; they are really useful for many things, such as programming, or even just bouncing ideas off of. However, they aren’t a replacement for search (they give incorrect answers too often, sorry Microsoft). Generative AI, such as the models that make images, has some potential uses too, but it needs some kind of legislation to prevent people from training their models on artists’ work without their consent.
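The “advanced autocomplete” framing above can be illustrated with a toy bigram model. This is a deliberately crude sketch of my own, nothing like the neural networks real chatbots use, but it shows what “predict the next string of text in a sequence” means at its most basic:

```python
from collections import Counter, defaultdict

# Toy corpus -- a real model trains on billions of tokens, not one sentence.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: a bigram model, the crudest possible
# version of "predicting the next string of text in a sequence".
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice after "the", vs. "mat" once)
```

A real model does the same kind of thing over subword tokens with a learned probability distribution instead of raw counts, which is why the outputs feel fluent rather than parroted.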


Because this “AI” doesn’t learn the way an animal does. It just steals other people’s artwork and generates a random result. It can’t in any way have even the slightest artistic intention, so it can’t make art, nor will it ever be able to.
But it can destroy SO much…

1 Like

The thing is that so many things about how animals (or we) learn, and the optimal ways to teach, are unknown to us, and even finding a common definition of intelligence is difficult. Another added complication is that it doesn’t actually matter that much what things are, but how we perceive them, as there is little possibility for us to grasp the true essence of things, considering we are always the first filter.

1 Like

That’s not the problem.
You feed works into the AI and it uses them. As long as they can’t do anything without stealing art, they shouldn’t exist.

I’m okay with an AI that shows everyone which art it is using. But that doesn’t exist, because then we would all see how it is using other people’s art without authorization. And that is a true cancer for the art world.
People say “humans also learn from other people’s art”, but we don’t need to actually use that art. AI is just camouflaged plagiarism.
When anyone discovers that one person’s art is a copy of another’s, that artist is criticized. I don’t see why these “AIs” shouldn’t be too.

I completely disagree.
The use of other people’s art by an AI can be called one thing only: plagiarism. Can we talk about learning and inspiration for a computer that NEEDS other people’s art (always without any kind of consent) and will never be able to think about what it is doing?
Absolutely not.

AIs should be illegal if their ““““inspiration”””” is not shown.
If you can’t steal other people’s art, your “AI” shouldn’t be allowed to do it either.
The moral implications of that stealing are very clear.

There are already people saying “my AI uses other people’s images, but since I can’t control it, I don’t want to give those artists anything”, and that’s its main use, because they are not even creating anything worthwhile.
And if this doesn’t stop soon, thousands of artists are going to lose their jobs to a robot that is precisely using their art, so the cycle of artistic creation will stall like the gene variety in royal families.
This doesn’t provide anything to our artistic world, but it can destroy it completely.

Yes, as you can imagine at this rate, I hate AIs like I hate almost nothing else in this world.

1 Like

My thoughts as well. I believe that in the long run this will probably make us think about the meaning and scope of what constitutes “inspiration”.

1 Like

Another difference I would point out is that, evolutionarily at least, biological intelligence never had a purpose; it was just something that helped organisms survive and transmit genes. That means it might be overly complex for what it needs to do. Artificial intelligence, as it is designed with a clear goal in mind, could in theory be more efficient and more optimisable for the same goal.


That’s not true at all. Human communication is extremely complex.
I am a translator, and I’m not worried about automatic translation, because a machine can’t possibly translate properly.
Yes, you can say “Hello” and give another “Hello” as an answer, but even the simplest human interaction has much more complexity than initially appears. And that’s something a machine will never understand, because a machine has no intelligence and therefore can’t understand anything.

These answer-giving AIs are more like a parrot repeating what it has previously heard than an actual intelligence. It only seems like it can “think up” a proper answer because its database is massive.

1 Like

Related to this, a few days ago I watched something that blew my mind. The guys from Linus Tech Tips host a podcast where they talk about tech topics, and in one of those videos they tested Bing’s AI (33:25). Their first two questions to the AI were pretty standard, just like Bing’s responses, which looked like a bunch of information scraped from the internet.

At that point, the only advantage of Bing’s AI over ChatGPT seemed to be its capability to access the internet, until Linus asked the third question (43:39): “how many ltt backpacks will fit in the trunk of tesla model y?”. The answer to that question is not something a typical AI could pull from its data pool, simply because it was so specific that it hadn’t been written anywhere yet. Most AIs, like ChatGPT, would just throw out random BS that seems plausible. What Bing’s AI did was astounding:

  • It acknowledged that it didn’t have the answer but would try to answer as best it could.
  • Grabbed the backpack dimensions from the LTT Store website.
  • Grabbed the Tesla Model Y dimensions from an article on the internet.
  • Made a basic guess assuming ideal conditions to offer an initial answer, but acknowledged it was an unrealistic approach.
  • Did multiple calculations involving different dimensions of the backpack and the trunk to provide a better estimate, but acknowledged it was still off, considering a typical backpack has no perfect geometric shape, and properties like flexibility and compressibility have to be taken into account as well.
  • Looked for videos of the Model Y trunk and found one of people arranging suitcases with dimensions and capacity similar to the LTT backpacks.
  • After analyzing that video, it concluded that the Model Y trunk can fit about 5 to 7 suitcases, so, accounting for the differences between those suitcases and the LTT backpacks, it concluded that about 6 to 8 LTT backpacks should fit, depending on how they are arranged and packed.

One of the hosts confirmed that the final answer was accurate.

Basically, what artificial intelligences like ChatGPT or Midjourney do is take a prompt submitted by the user, search their data pool for things that match the prompt, and output that information in a way that seems reasonably plausible, even if it’s factually wrong. However, what Bing’s AI showed was more like a rudimentary thought process. Not only was it capable of understanding context, it also tried to improve its answer under different conditions and calculations, ultimately taking a real-life example to portray the most accurate scenario and give a genuinely accurate answer. That’s nuts.
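The “ideal conditions first, then refine” part of that process can be sketched numerically. This is a minimal sketch of my own, with placeholder dimensions, NOT the real LTT backpack or Tesla Model Y figures, and obviously not what Bing actually runs:

```python
# Sketch of the estimate-then-refine arithmetic described above.
# All dimensions are illustrative placeholders, NOT real measurements.

def naive_fit(trunk, item):
    """'Ideal conditions' first pass: pure volume division, which
    ignores shape, arrangement and compressibility entirely."""
    trunk_vol = trunk[0] * trunk[1] * trunk[2]
    item_vol = item[0] * item[1] * item[2]
    return trunk_vol // item_vol

def axis_fit(trunk, item):
    """Stricter second pass: how many rigid, axis-aligned items fit
    along each dimension (still ignores rotations and squishing)."""
    count = 1
    for t, i in zip(trunk, item):
        count *= t // i
    return count

trunk = (100, 80, 40)    # cm -- placeholder trunk dimensions
backpack = (50, 30, 20)  # cm -- placeholder backpack dimensions
print(naive_fit(trunk, backpack))  # -> 10, the volume-only upper bound
print(axis_fit(trunk, backpack))   # -> 8, once rigid geometry is respected
```

The gap between the two numbers is exactly why the AI kept refining: each pass drops an unrealistic assumption, and the video of real suitcases served as the final sanity check.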


One of the reasons I’m actually afraid of the future is how many jobs (even technical ones) consist of feeding input data to specialised software, say, boundary conditions, initial cases, borders and such. Determining those is not easy, but it is rather mechanical work, which then produces results that have to be interpreted. Both the initial setup and the final analysis should be a walk in the park for a decent AI in the near future.

Engineers will be relics of the past!

I’m not even completely sure which non-manual jobs can’t be fully automated.

There was an SMBC comic saying that what makes us human are the things that “robots” can’t do, so society tried to excel at them to become the best expression of humanism. That quickly devolved into crime, swearing and being drunk.


Yeah, nope, things are not that simple.
A fly’s brain is more complex than the most complex computer in the world.
We are not robots. Things such as language are extremely complex and won’t ever be correctly computerized.
Not in this century, at least.

1 Like

Because AIs are actually stealing art.
You can look at other people’s art and use it as inspiration. In the first place, the word “inspiration” can’t possibly be applied to something with no artistic intention at all.
In the second and most important place, an AI doesn’t enjoy other people’s art and create its own; it just steals the art, as it is programmed to steal and replicate it.

It is absurdly simple to me: since it uses other people’s art without any consent, it should be illegal.
But, of course, if it had to get consent, it couldn’t steal thousands of images and then create anything.

1 Like

To me, there are a few subjects that get entangled with AI as a general category: intellectual property and rights to knowledge, the for-profit system of large-scale publishing, corporate ownership of AI as a tool (and of whatever might come after) as opposed to individual usage of AIs, and probably many others.

I might be biased by the academic knowledge system, though, where you pay to publish, hope your publications get pirated because you despise the publishers, and are always building on top of piles and piles of citations of whatever has already been reported, adding just a minimal droplet toward advancing the state of the art.