It can take newborns a couple of years to understand object permanence, and a few more years after that to develop a deep understanding of the characteristics and rules that define those objects, such as puzzle pieces or coins of differing values.
You can imagine, then, how hard it would be to build an AR app that does anything close to what a toddler can. But based on two rough-and-ready tech demos I recently saw from a startup called Singulos Research, we might be almost there.
The Singulos team appears to have made good progress toward software that makes AR seem, in a sense, “smarter,” packaged in a product it calls the Perceptus Platform. There are countless examples of sophisticated AR on phones today, but apps that detect faces, bodies, or objects usually don’t try to build an understanding of where those targets sit in space or how multiple similar objects differ from one another. Perceptus is an attempt to let developers define entire categories of objects for which an app can develop spatial and situational understanding.