Will AI be able to read minds by 2030?

Using non-invasive technology (significantly more accessible than fMRI), will an AI be able to read the basic gist of what a human is thinking by 2030?

The readout does not need to be word-for-word exact, but it should be accurate enough to provide useful information.

EDIT: This does not include methods that require extensive training on the individual. The definition of "extensive" is left to my discretion.


Will it count as yes if the system needs to be trained on the individual?

@Abraxas excellent question, thank you. I'm leaning towards no, but open to suggestion. If yes, I would want to put a quantifiable limit on how much it can be trained on an individual, and I have no idea how to quantify that.

@Tzvi I think 'no' is far cleaner, but it also seems very unlikely. My understanding of articles like 'fMRI successfully reads minds' is that they rely on baselining the individual first (e.g., they ask you to think of or say fish, car, blue, etc.), and only after that can they interpret your measurements.

@Abraxas I can imagine that, with enough training, fMRI techniques could be used to read minds either without individual calibration or with unsupervised learning on the individual (unsure whether that even counts as training on the individual!). Assuming another, more easily accessible form of technology is used, the training set could be big enough to read minds at a basic level without training on the individual... I think 'no' is the only way I can make this a resolvable, balanced market.