Will it become common to carry a separate device to enable AI listening to one's own phone outputs, by mid-2026?
Standard
8
Ṁ1066
2026
25% chance

Line: at least 10% of SF tech workers do this at any time before the end date

Example devices: a necklace, second phone, or glasses that can listen to the phone's inputs and outputs

Example behavior: play YouTube on the phone and have the other device analyse it or reply

Judgement:

This seems hard to judge, but I'll do things like:

  • Survey working researchers/engineers I know

  • Look at conferences I go to (Manifest, etc.)

  • Note the criterion is low, just 10%. It can't be a single guy in a group of nine.
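The 10% line above amounts to a simple threshold check. A minimal sketch, where the survey counts are entirely hypothetical (the market doesn't specify how many people would be surveyed):

```python
# Hypothetical survey: did this SF tech worker carry a second
# AI-listening device at any point before the close date?
surveyed = 50        # respondents asked (made-up count)
carriers = 7         # respondents answering yes (made-up count)

THRESHOLD = 0.10     # the market's resolution line: at least 10%

fraction = carriers / surveyed
resolves_yes = fraction >= THRESHOLD
print(f"{fraction:.0%} carry a second device -> resolves "
      f"{'YES' if resolves_yes else 'NO'}")
```

With these made-up numbers, 7/50 is 14%, which would clear the line; 4/50 (8%) would not.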

I won't bet here

My view on how this could happen

1) Assistants are super useful if they have enough media inputs

2) Assistant apps would like to directly receive 60 fps screenshots of other apps' output (or the whole phone screen), plus audio, so they can add an AI help overlay.

3) App creators and the RIAA/MPAA really don't want this, for legacy media-protection reasons. Similarly, YouTube wouldn't like full copies of all its videos, or even locally generated summaries, being leaked; that is valuable training data. Neither would apps like Snapchat.

4) Privacy people at Google/Apple, government regulators, EU/GDPR people, etc. don't want this either, since it would make manipulation and privacy violations much easier; for example, an app that does live analysis of emotions, language-use skill, class signifiers, or attractiveness in an overlay HUD for you, based on its evaluation. This is scary: if it were allowed, there would be some super socially uncomfortable apps. Image-to-text apps already have lots of built-in social filters (generally refusing to evaluate age, status, attractiveness, health, social class, race, sexuality, etc.) even though people would love those evaluations. Remember the blowup over the race-changing app? It seems to me apps could provide "emperor has no clothes"-style evaluations of most socially sensitive issues which we normally automatically ignore.

5) So Android and iPhone would just opt not to support these APIs. They may have private first-party APIs which only they are allowed to use, or public ones that are super invasive/EU-GDPR-style complex to manage for privacy, which makes apps super hard to develop and greatly restricts use.

6) But the raw versions are so useful that users just get a second device to directly grab the media.

bought Ṁ250 NO

... but... you can already have your phone listen to its own outputs; two devices aren't needed?

@equinoxhq It isn't actually very well supported right now. ChatGPT voice mode listening will prevent Audible from playing, for example.

@Ernie Ok, but that sounds like a software issue, which is much easier to fix than a hardware issue. Like, it's physically possible for the hardware everyone already has to do this, so if it becomes something people are willing to pay money for, the software change will likely get made. And, I wouldn't want to make my money selling a device which would predictably become unnecessary if OpenAI makes a software update.

@equinoxhq

Updated description with the scenario I have in mind