what's a well thought out opinion you have on AI that most people interested in AI probably haven't heard?

The Cambrian explosion is a good metaphor for what's about to happen. The first single-celled life didn't have any predators; it didn't have to out-compete anything.

But when the oceans filled up with enough life, it went nuts. Things started evolving defenses because other things were developing into effective predators. It was this great arms race that gave an incentive to mutation in general, and to some of the specific traits and strategies we see today.

We don't know if abiogenesis is common or rare. But if it happened again on earth today, we would never know because some tadpole would eat the evidence in the first generation. Any life on earth today is overpowered compared to the first self-replicating cells.

We've never met anything smarter than us. We're like that first single-celled life. We're not built for competition, because we've never had any. We're smart enough to build defenses, I think, but it will take time. Think of yourself as that first amoeba, trying to design armor for every weird creature you're about to encounter, if you want to survive the Cambrian period.

The probability of another AI winter is much higher than people think. Historically, there were two major winters, each following a period of exceptional hype, not to mention several smaller episodes:

  1. Failure of machine translation and early perceptrons (late 1960s to early 1970s)

  2. Slowdown in the development of expert systems due to excessive cost (1980s to 1990s)

As of today, it appears that we've overcome these issues by leveraging deep neural nets, huge amounts of data, and tons of computational power. However, imo, in order to keep making these exceptional advances (think DALL-E, GPT) at our current pace, AI companies will need to either 1) gain access to more and more good training data and/or 2) develop novel algorithms and training processes that greatly improve accuracy and generalization once more.

Regarding 1, several researchers are now warning of a potential incoming data shortage. Generally, because AI systems become more powerful the more data they are trained on, a data shortage could lead to a "plateau." This makes me think that 2 will become the key to continued AI acceleration. Sam Altman himself has said, “I think we’re at the end of the era where it’s going to be these giant, giant models... and we’ll make them better in other ways.”

If it took 50 years to get to where deep neural nets are now, I think it's relatively likely that the next big model/algorithm/secret sauce will not come for a while - even with the rapid pace we are currently on.

"The Course of Empire is a series of five paintings created by the English-born American painter Thomas Cole between 1833 and 1836."

Cole's fourth painting in this series, Destruction, is where we are headed. This is not a "maybe" - this is how history works.

If past performance does not indicate future results, then I just may try jumping off a 117-story cliff onto a frozen lake to test if gravity still works.

If gravity still works, I'll probably die.

However, if gravity still works, I will know as I fall to my death that we haven't even reached Tommy's 4th pretty picture.

That would be enough for me to be confident that the destruction of civilization is more likely than an AI takeover. There may be some overlap between the emergence of AGI and societal collapse, but nothing close to the point where AGI (DON'T GET ME STARTED ON ASI) will cause a problem.

Data centres are only secure in a functioning society. When society falls, there will be Mad Max-style hordes ripping the literal and figurative guts out of all data centres, GPU farms, yada yada yada.

The physical structures that house any "AI" or "AGI" (don't get me started on "ASI") will be nothing more than scrap-metal heaps for ABI (aphasic biological intelligence) to grunt at and rip apart to get dem gold.

Stacks of GPUs - or whatever quantum bullshit Extropic is trying to pull - will be nothing more than the fallen pieces of the Colossus of Rhodes. Once magnificent, and now nothing but a pile of metal to try and find gold in...

Maybe I'll throw a rock instead of jumping. I feel one trial run to see if gravity still works is pretty AGI myself!

AI-generated art (digital art, writing, video editing, etc.) is never going to be preferred, in a general sense, to human excellence in the same medium. This is because of the communicative nature of art and AI's inability to replicate human creative excellence.

My knowledge and opinions of art are, like anyone's, subjective. But as I learn my medium of video editing, I am becoming aware of the amount of attention to detail and intentional choice that needs to be present to appease the subconscious of the audience.

I believe this is because art is a method of communication we humans have evolved. By creating art, you are taking your thoughts, identity, emotions, and even unconscious biases - all drawn from your personal life, regardless of the medium you choose - and encoding them into your artwork.

This is where AI fails in creating art: our method of teaching it does not include giving it a personal life it can draw these identity aspects from. Instead, it can only learn, or even perfect, an art medium by collating the work of other human artists.

The result is an uncanny-valley effect when looking at high-quality AI art. Even if AI art loses its obvious visual errors, we will still judge a piece of art with the subconscious part of our brain that is looking for the human message within. Even if we are unaware that this is the reason, we will find AI art mediocre in comparison.

In summary, AI art cannot reach the levels of human excellence because it does not have the human experience behind it that the audience is, often unconsciously, judging it on. Instead, AI art will be limited to mediocrity, and excellent human art will remain the generally preferred option.

Humanity being accustomed to computers that are deterministic might be a challenge for AI development.

Program a computer to calculate the square root of a number and you will get the same result, precise and repeatable, with no need for an exact copy of some learned pattern. Create an algorithm using machine learning for the same task and you will get approximate results, with error margins increasing for values outside the training data; to get the same result every time, you need to run it on the same model, with the same seed if needed.
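As a minimal sketch of that gap (assuming NumPy and scikit-learn; the seed, network size, and training range are arbitrary choices for illustration): math.sqrt answers exactly and identically on every run, while a small learned regressor only approximates the answer, drifts badly outside its training range, and only reproduces its own output when the model and seed are pinned.

```python
import math

import numpy as np
from sklearn.neural_network import MLPRegressor

# Deterministic program: same input, same answer, on every machine, every run.
print(math.sqrt(2.0))  # 1.4142135623730951, always

# Learned approximation: train only on inputs in [0, 100].
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 100.0, size=(2000, 1))
y_train = np.sqrt(x_train).ravel()

# Pinning random_state is the "same model, same seed" requirement above:
# change it and the learned weights (and predictions) change too.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=42)
model.fit(x_train, y_train)

# Inside the training range the approximation is decent but not exact...
print(model.predict([[2.0]]))      # roughly 1.41, never exactly sqrt(2)
# ...and outside it the error margin blows up, as described above.
print(model.predict([[10000.0]]))  # nowhere near the true value, 100.0
```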

Now take that same idea to complex tasks. With AI, we have achieved great results where enough training data is provided, but where it is not, we have to ask whether we should accept that the system will make mistakes, and even whether we can or should provide data to patch these holes in the training set.

It might sound obvious, but in a world where we use computers to calculate and to replicate models in research, and rely on computers for transactions and security, we have some confidence that a computer running a program can repeat a task many times with no mistakes - even to the point that using a human calculator feels less reliable. With AI, this might come as a shock to some people.

Machines aside, nowadays we humans do tasks on a daily basis and make mistakes while doing them, but even so, we don't have a lot of tolerance for mistakes. We have education systems with grades to reduce mistakes and train people to do academic tasks better, and we have police and jails to step in when people make mistakes that harm others.

When AI starts to make the same mistakes as humans, at greater scales, I just wonder whether we could tolerate it, or whether we would create something to punish AI systems that make mistakes.

What's the baseline requirement for "well thought out"?

AI, especially ChatGPT 4, is so fond of fusilli pasta as an ingredient that it struggles to consider other staples. I'm not joking. DM me for details.