We talk a lot about AR and VR, and if we're really feeling it, sometimes even MR and XR, but today we’d like to talk about the most underappreciated R of all: QR.
Quick Response (QR) codes were invented in 1994 by Denso Wave, a Toyota subsidiary, to track parts in automotive manufacturing, and despite high adoption in Asia and Europe, they have never really caught on in the US.
This lack of US adoption could be partly because QR codes weren’t super useful until the last few years. The tipping point for QR codes in the US came in late 2017 when Apple and Google added QR code scanning as a standard feature of their camera apps.
Before then, you needed to download a dedicated QR code reader app, and those were kind of a pain to deal with. (PSA: if you still have an old QR code reader app, you might want to delete it; it's unnecessary at best and sharing your data at worst.)
These days you just open your phone’s camera, aim it at a QR code, and away you go. Here’s one to try:
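Under the hood there's no magic: a QR code encodes a short text payload, and the camera app decides what to do based on that payload's shape. Here's a minimal sketch of that dispatch logic in Python; the prefixes follow common de facto conventions (URL schemes, the unofficial "WIFI:" credential format), not any single vendor's documented behavior.

```python
from urllib.parse import urlparse

def classify_payload(payload: str) -> str:
    """Guess what a scanner should do with a decoded QR payload.

    QR codes carry plain text; apps act on conventions such as URI
    schemes or the de facto "WIFI:" format for network credentials.
    """
    if payload.upper().startswith("WIFI:"):
        return "join-network"      # e.g. WIFI:T:WPA;S:MySSID;P:secret;;
    if payload.lower().startswith("mailto:"):
        return "compose-email"
    scheme = urlparse(payload).scheme.lower()
    if scheme in ("http", "https"):
        return "open-url"          # the common case: a link
    return "show-text"             # fall back to displaying raw text

print(classify_payload("https://example.com/menu"))         # open-url
print(classify_payload("WIFI:T:WPA;S:Cafe;P:espresso;;"))   # join-network
```

This is why a restaurant menu QR code "just works" with no app: the payload is an ordinary URL, and the camera hands it to the browser.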
In fact, in the socially-distanced world, QR codes are having a bit of a moment, offering a touch-free way to access menus and other valuable everyday information.
But how does QR relate to the other Rs like AR?
The dream of AR is: you see something, want to know more, and use your device (phone, glasses, contacts) to bring it to life.
We’ve all seen those examples. Our team at Balti Virtual has created a lot of them: a cereal box transforms into a video game, a magazine page becomes a shopping experience, a bar coaster turns into a game, Kawhi Leonard breaks through the 100’ mural. You know, the usual stuff.
There are almost limitless uses for a digital, interactive layer on physical objects.
Hands down, the biggest challenge to doing this successfully in the real world is alerting people that a digital experience is available and helping them access it quickly.
In the slightly distant future, you’ll be able to aim your phone at anything to access its digital layer (see Kevin Kelly’s Mirrorworld), but in the meantime, the options are limited:
Use text/words to instruct users to download an app and scan a product. This approach doesn’t work for two reasons: people don’t really read instructions, and people hate to download apps.
Use symbolic language, like Snapcodes, to instruct users to scan a product with a specific app. One issue with this approach is that each social media network has its own unique code format, and pretty soon your ad or packaging starts to look like a NASCAR race car, covered with logos that carry no meaning for most users. Along these lines, I hope that Apple’s “Project Gobi” isn’t just an iOS-only version of this, because that would just add another sticker to the car.
Use NFC tags, allowing users to tap their device to the physical object to access its digital layer. This may become a more common approach as native support for NFC reading grows, but right now the tech is mostly stuck where QR codes were a few years ago: a separate app is required (see the previous point regarding apps).
Use a universally recognizable symbol that users scan without an app.
Yes, QR codes aren’t the most beautiful things ever conceived, but they will do the job until Apple and Google can agree on a prettier-but-still-universally-recognizable format to trigger a digital experience.
In the meantime, there’s been some fun work to spruce up this 1994 classic.