AR Weather Vane
Bonsai is an augmented reality weather vane, conveying weather data in a visual format. Created by the team at Voyant AR, it was deliberately designed to explore a potential future state of an augmented reality (AR) user experience.
Today’s AR experiences on mobile devices fall short of being truly immersive in several ways.
Field of view is restricted to the rectangular frame of a smartphone or tablet.
User inputs are via touch screen.
At least one hand is required to hold the device while in use, which causes fatigue and discomfort over long periods.
Although voice and gaze inputs are possible with today’s mobile AR technology, to date they have not been widely used in applications available via app stores.
But the future state of AR will look (and feel) very different.
Major established brands (Apple, Samsung, Google) as well as startups (Meta) have either patented, or are rumoured to be working on, lightweight glasses that will deliver AR content to mainstream consumers at (hopefully) an affordable price point. Other companies such as ODG and Epson have offered lightweight AR glasses for some time, but applications have been predominantly for the corporate sector. In December of last year, Magic Leap announced a creator/developer edition of its lightweight headset.
Magic Leap, creator edition.
Lightweight AR glasses are an important step change for the delivery of AR content. They will enhance user experience by:
Increasing the user’s field of view for a more natural and immersive experience.
Allowing the user to experience AR for longer periods of time (potentially all day).
Providing additional interaction methods such as hand gesture recognition as well as voice and gaze.
The team at Voyant AR considered these factors and how the evolution of AR hardware may influence the design of AR content and user experience (UX). As a potential use case, we wondered how weather data might be served in a future where everyone is wearing lightweight AR glasses all the time.
Bonsai: a concept design for the future state of AR UX
Imagine one morning, waking up at home. After a few moments, you reach out and pick up your AR glasses. You put them on and get out of bed. Walking through your living room, you notice a bonsai tree on your coffee table, resting on a stack of books.
Concept design for bonsai AR weather vane.
Bright green foliage and emerging pink blossoms remind you it’s spring. But grey clouds hide the bonsai’s highest branches and there’s a slight mist of rain, foreshadowing the need to pack your umbrella and coat should you venture outside today.
Gesturing with your hand over the tree triggers a display that hovers in the air, showing today’s date and temperature. Alternatively, you could ask what the weather is, just as you would with today’s voice-activated assistants such as Siri, Alexa or Google Home, but instead of hearing the weather forecast spoken aloud, you could “see” it via 3D weather animations.
This future state design presents a frictionless user interface. There is no app to “open” per se. The bonsai is “always on”. It is always located on the coffee table in your lounge room. Of course, you could choose to relocate it elsewhere: the kitchen bench, bathroom counter or hallway table. Or make it bigger and place it in your courtyard (a beautiful tree you never have to water), as per the image below.
Concept design for large-scale bonsai AR weather vane.
This concept design proposes several key features for the future state of AR.
The app is “always on”
You don’t need to turn it on or off each time you want to access content, the way you would with today’s apps. It simply “lives” in your environment and displays or changes depending on contextual awareness (more on this later). Of course, you could also hide it temporarily when you wanted to view other content.
Aesthetics meet functionality
If an AR app is “always on”, it makes sense that its content should be aesthetically appealing to the user. (Otherwise irritation may trigger deletion.) You might want an animated AR bonsai tree because it’s visually appealing in your home and easier to look after than a real one. But if we extend this notion, AR will also allow designers to ascribe practical function to beautiful objects, both AR and real. A sculpture at your front door reminds you of a colleague’s birthday, a candle holder signals a room’s ambient temperature, or a lamp displays the time.
Frictionless UI
The future state of AR will include content that is so seamlessly integrated with the real world that it will be difficult to distinguish between what is real and unreal. Thus, the way we interact with AR content should be natural and require little thought or training. Lightweight glasses, wireless headphones and microphones (potentially integrated with the glasses), and voice and gesture recognition will blend user inputs with AR outputs.
Object permanence
The AR asset is anchored to a specific location in your home. If you left the lounge room, it would still be sitting on your coffee table. Unless of course you chose to take it with you, for example while traveling.
Multiple users and unique instances
The AR asset may live in your home, but you choose whether it’s shared and visible to visitors. If the AR asset is a unique instance, one user could alter the asset and other users would see that change. For example, if I “blew” on the bonsai’s flowers, my visitors would see them sway.
Design and development of “Bonsai” iOS app
To explore the “bonsai weather” concept further, the team at Voyant AR developed an iOS app prototype with ARKit, Apple’s AR platform.
Our design considerations included the following:
Build a native AR app. Our objective was to adopt an “AR-first” design approach. We didn’t want to port an existing mobile weather app design into AR, for example by placing a 2D screen hovering in the air or mounted against a vertical plane such as a wall or fridge. We felt this would miss much of what is possible with AR/mixed reality. Why place a flat object when we could place a 3D object in the real world instead?
Simple user interface (UI). We wanted to design a simple AR interface that would allow the user to interact with AR content in an unobtrusive way.
Real time data. The app would pull live location-based weather data from OpenWeatherMap and manifest this information as 3D AR animations.
Aesthetics. The bonsai was selected because it is a deciduous tree, allowing time of year to be represented through leaf changes. We also felt that bonsais promote a feeling of peace and tranquility, making them suitable for an “always on” display.
Prototyping, testing and lessons learned
Initial design
We started with a very minimal design. There was no explicit menu UI; in fact, there were initially no buttons or text at all! If we had to introduce any explicit UI, it would only be to facilitate the most essential interactions and data. Our hypothesis was that users would naturally try to interact with the bonsai.
Mockups for bonsai AR weather vane mobile app.
Initial development work started with testing ARKit, Apple’s AR platform for iOS devices with an A9 chip or higher running iOS 11 or later. This was our first foray with ARKit and we were pleasantly surprised by how robust it was. Sure, it didn’t work well on plain/untextured surfaces, but we knew about those limitations from other developers’ reported experiences. (For a more detailed account of ARKit’s technical features, Matt Miesnieks’ article is a great reference.) On such surfaces, we found that the bonsai would hover, float away, jump from spot to spot or scale unreliably (too big or too small). But once a horizontal plane was detected, tracking from there was pretty good. (At the time of writing, Apple had just released ARKit 1.5, a beta version that includes vertical plane detection.)
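For readers unfamiliar with ARKit, getting a session running with horizontal plane detection takes only a few lines. The sketch below shows the general setup as it looked in ARKit 1.x; the class and property names are our own placeholders rather than code from the actual prototype.

```swift
import ARKit

// Minimal ARKit session with horizontal plane detection (ARKit 1.x era).
class BonsaiViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!   // placeholder outlet name

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // vertical planes arrived with ARKit 1.5
        sceneView.session.run(configuration)
    }

    // Called when ARKit detects a new plane: a candidate surface for the bonsai.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        // e.g. show a placement indicator, or wait for the user to tap (see the later sketch).
    }
}
```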
ARKit was also fairly good at maintaining “session permanence” via visual inertial odometry (i.e. using the camera feed and motion sensors to estimate change in position over time), so it didn’t lose too much tracking when you pointed away from the target and then back again.
We tested different lighting conditions and the bonsai model worked well and looked great outdoors in full daylight.
Prototype development
The next step was integrating live location-based data from OpenWeatherMap, which has a good API. Once the data was feeding through, we created two animation states:
Season (summer, autumn, winter, spring) demonstrated through leaf colour, leaf number and blossoms.
Weather (sun, clouds, rain, snow). The weather effects took time because they needed to appear “volumetric” (that is, more three-dimensional). Although the sun is a simple sphere, the clouds are made from a particle system: essentially a scattered group of two-dimensional planes, each displaying an animated sprite sheet. Specific “shaders” (code detailing how an object should be rendered/displayed) contribute to the transparent, volumetric effect (a rough sketch of this approach follows below). The final result looks quite convincing and realistic when viewed up close.
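SceneKit’s built-in particle system lends itself to this kind of effect, since its particles are billboarded 2D planes that can step through an animated sprite sheet. The sketch below is an illustration of that approach rather than the production asset; the sprite sheet name and parameter values are placeholder assumptions.

```swift
import SceneKit

// A rough sketch of a sprite-sheet-driven cloud using SceneKit's particle system.
func makeCloudNode() -> SCNNode {
    let cloud = SCNParticleSystem()
    cloud.particleImage = "cloud_spritesheet.png"   // placeholder animated sprite sheet
    cloud.imageSequenceColumnCount = 4              // frames laid out in a 4 x 4 grid
    cloud.imageSequenceRowCount = 4
    cloud.imageSequenceFrameRate = 12
    cloud.birthRate = 8
    cloud.particleLifeSpan = 6
    cloud.particleSize = 0.12                       // metres, sized for a tabletop bonsai
    cloud.particleSizeVariation = 0.04
    cloud.particleVelocity = 0                      // let the sprite animation provide the motion
    cloud.isAffectedByGravity = false
    cloud.blendMode = .alpha                        // soft, transparent edges
    cloud.emitterShape = SCNSphere(radius: 0.08)    // scatter particles around the upper branches

    let node = SCNNode()
    node.addParticleSystem(cloud)
    return node
}
```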
When both of these systems were working (pulling live data and playing animations), we programmed animation states to trigger based on the relevant data (a simplified code sketch follows this list):
Season: real-time date + real-time location tracking via device coordinates + seasonal model (correct hemisphere).
Weather: real-time date + real-time location tracking via device coordinates + BOM data.
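As a rough illustration of that mapping, the sketch below derives a season from the date and hemisphere, and a weather state from the OpenWeatherMap current-weather endpoint. The endpoint and its lat/lon/appid parameters are real, but the enum names, API key and JSON handling are simplified placeholders rather than the prototype’s actual code.

```swift
import Foundation
import CoreLocation

enum Season { case summer, autumn, winter, spring }
enum WeatherState { case sun, clouds, rain, snow }

// Season from month and hemisphere: southern latitudes flip the usual mapping.
func season(for date: Date, latitude: CLLocationDegrees) -> Season {
    let month = Calendar.current.component(.month, from: date)
    let southern = latitude < 0
    switch month {
    case 12, 1, 2: return southern ? .summer : .winter
    case 3, 4, 5:  return southern ? .autumn : .spring
    case 6, 7, 8:  return southern ? .winter : .summer
    default:       return southern ? .spring : .autumn
    }
}

// Current conditions from OpenWeatherMap, reduced to our four animation states.
func fetchWeather(latitude: Double, longitude: Double,
                  completion: @escaping (WeatherState?) -> Void) {
    let key = "YOUR_API_KEY"   // placeholder
    let url = URL(string:
        "https://api.openweathermap.org/data/2.5/weather?lat=\(latitude)&lon=\(longitude)&appid=\(key)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let weather = (json["weather"] as? [[String: Any]])?.first,
              let condition = weather["main"] as? String else {
            completion(nil)
            return
        }
        switch condition {               // e.g. "Clear", "Clouds", "Rain", "Snow"
        case "Snow":            completion(.snow)
        case "Rain", "Drizzle": completion(.rain)
        case "Clouds":          completion(.clouds)
        default:                completion(.sun)
        }
    }.resume()
}
```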
User testing
We started user testing our minimal UI approach, which produced interesting results. Users loved the AR bonsai and animated weather effects. However, they didn’t realise that the bonsai was animating live weather data for their current location. So we introduced a plaque in front of the pot displaying the user’s current location (see image below).
Bonsai digital prototype with location plaque added.
Our design included a forecast panel which displayed the current date and maximum temperature. To trigger it, we had a single interactive element: the bonsai itself. The user had to tap the bonsai (on the device screen) to display the forecast panel. Unfortunately, users did not realise the bonsai was interactive and didn’t tap it (unless by accident).
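For context, detecting a tap on a virtual object in an ARSCNView is a SceneKit hit test from the touch point. The sketch below illustrates the idea, extending the hypothetical view controller from the earlier sketch; the node name "bonsai" and showForecastPanel() are placeholders rather than the prototype’s actual code.

```swift
import ARKit

extension BonsaiViewController {
    func setUpTapHandling() {
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let location = gesture.location(in: sceneView)
        // SceneKit hit test: returns any virtual nodes under the touch point.
        let hits = sceneView.hitTest(location, options: nil)
        if hits.contains(where: { $0.node.name == "bonsai" || $0.node.parent?.name == "bonsai" }) {
            showForecastPanel()
        }
    }

    func showForecastPanel() {
        // Placeholder: reveal the panel with today's date and maximum temperature.
    }
}
```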
The issue was designing an appropriate “cue” to signal to the user that there was an interactive element that would trigger an action. (For example, websites have buttons with specific calls to action like “learn more”.) We tried adding a small white sphere that hovered in front of the bonsai to act as a 3D button (see Figure 5). But users didn’t notice the sphere, or didn’t realise that they could interact with it. In this prototype iteration we also introduced shadows for the bonsai, pot and plaque. These looked great, especially when viewed in full daylight.
Prototype with the all too subtle white button. But the shadows were looking good.
In the end, we fell back to a more familiar mobile UI pattern and placed a “Forecast” button on the location plaque (see Figure 6). It was a concession to our minimal UI approach, but given that users were providing input via a touchscreen and there were no other cues for interaction (i.e. traditional sticky UI placed at the bottom or side of the screen), the original design was perhaps too ambitious. We did resist adding a “back” button to the forecast panel, and were rewarded by the fact that users naturally tapped the panel to hide it.
Prototype with button to trigger forecast panel. Top: default state. Bottom: triggered state.
Rather than have the bonsai appear immediately once the app started, we gave the user control over when the experience started and where the bonsai would be placed via a bonsai icon in the bottom left corner of the screen.
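Under the hood, this tap-to-place step maps naturally onto ARKit’s hit test against detected planes (the ARKit 1.x API; later versions replaced it with raycast queries). The sketch below is a simplified illustration with placeholder asset and node names, not the prototype’s actual code.

```swift
import ARKit

extension BonsaiViewController {
    // Called with the tap location once the user has chosen to start the experience.
    func placeBonsai(at location: CGPoint) {
        // Hit test against detected horizontal planes.
        guard let hit = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first,
              let scene = SCNScene(named: "bonsai.scn") else { return }   // placeholder model file

        let bonsaiNode = scene.rootNode.clone()
        bonsaiNode.name = "bonsai"
        let t = hit.worldTransform.columns.3
        bonsaiNode.position = SCNVector3(t.x, t.y, t.z)   // rest the pot on the detected plane
        sceneView.scene.rootNode.addChildNode(bonsaiNode)
    }
}
```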
For our own convenience, we also introduced an admin panel (accessed via a cog wheel icon in the bottom right corner) to control the weather settings manually. It never snows in Fremantle, so that was the only way we were ever going to see the snow animation!
The final experience can be seen in the video below.
Summary
Apple released ARKit in June 2017, and it is a major milestone for the development of mobile AR applications. Our experience developing this prototype was very positive. It’s exciting that Apple has made the leap into AR and, as some articles suggest, has big plans ahead.
Horizontal plane detection is definitely a step in the right direction. Other AR platforms require image tracking or a QR code to spawn an AR object. Plane detection removes that friction (the need to print or carry a physical marker) and gives an AR object the freedom to appear wherever you like.
Our prototype experience also taught us the value of aesthetics in AR. The bonsai model we used was realistic and lifelike rather than abstract or cartoonish in style. Users often moved very close to the object to marvel at the detail of the leaves, flowers and gravel in the pot. When viewed outdoors in full sunlight, the AR shadows did a great deal to enhance that sense of realism, particularly when other real objects in the same view cast similar shadows. The forecast panel’s text was deliberately mirrored when viewed from behind, forcing the user to walk around the bonsai if they wanted to read it. We realised that users want to be fooled into believing that this AR object is real. (Interestingly, many people viewing the above video don’t immediately realise that the bonsai isn’t real, even halfway through.)
That’s not to say that there isn’t a place for abstract, cartoon or even 2D objects in AR (indeed there is!), but we have to consider the context for this content. Our vision of a future state of AR, where the virtual blends seamlessly with the real world, is not based on AR content looking as real as possible but on the user’s experience with that content. Does it behave in a way I would expect? For example, sitting on a flat horizontal plane, or disappearing behind another object in the foreground (object occlusion). Does it react appropriately to environmental cues in the real world (shadows cast in the right direction)?
There is much to learn about user interface design for AR, most importantly how to design for AR without simply replicating traditional mobile or web interfaces. No one wants to see a world of 2D buttons hovering in space. But what’s the alternative? 3D buttons? Or a new design lexicon based on the idea that any (or all) AR objects have inherent interaction properties? Perhaps minimal UI in AR will only come of age, and offer the most affordance, when users can interact with AR assets directly using their hands (e.g. brushing a hand over or through an object), voice and gaze rather than through the intermediary of a screen. But that doesn’t mean we can’t start imagining and designing now what the future state of AR might be like.