
An Introduction to Mobile Augmented Reality Applications 


At Vervint, we’ve had numerous conversations with clients and potential clients about augmented reality (AR). Augmented reality has become increasingly popular in the last few years and is an exciting frontier in the mobile space. According to Statista and the Digital Journal, the market value of AR applications is estimated to reach around $200 billion or more by 2028. Additionally, 71% of consumers prefer to order online from shops that utilize AR when no brick-and-mortar shop is available.

For this article, we’ve created a (relatively) comprehensive guide to AR, discussing the basics of mobile augmented reality, looking at potential business use cases and evaluating development platforms that best suit your specific AR software needs. 

What Is Augmented Reality?  

Augmented reality integrates digital information with a user’s real-world space using technology to create an enhanced version of reality viewed through an interface — such as a mobile app. That is a high-level definition, but what does all that mean?   

The precise way augmented reality places items in the user’s 3D space varies from device to device. On mobile devices, the rear camera is used to detect and measure surfaces based on how light interacts with the different planes in view.

Below, we’ll break down the differences between iOS and Android devices:  

iOS  

On iOS devices equipped with a LiDAR sensor (recent Pro-model iPhones and iPads), that sensor scans and measures the distance between the camera and objects in its surroundings by emitting pulses of infrared light and timing their return, much as radar does with radio waves. This allows incredibly accurate mapping of the surroundings and more seamless integration of virtual objects placed in a real-world space: realistic lighting and physics values can be applied, so content reacts to the user’s planes more accurately. All the information gathered from the LiDAR sensor and camera is fed into ARKit, the iOS AR SDK, and translated to the model placed in the user’s environment.
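
To make this concrete, here is a minimal sketch of an iOS view controller that turns on plane detection and, on LiDAR-equipped devices, scene reconstruction. The ARKit calls are standard; the class name and view wiring are illustrative.

```swift
import ARKit
import UIKit

final class ARPlaneViewController: UIViewController, ARSCNViewDelegate {
    // ARSCNView renders SceneKit content on top of the live camera feed.
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        // Look for horizontal and vertical planes (floors, tables, walls).
        configuration.planeDetection = [.horizontal, .vertical]

        // On LiDAR-equipped devices, also build a mesh of the environment
        // so virtual content can be occluded and lit more realistically.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }

        sceneView.session.run(configuration)
    }

    // Called whenever ARKit adds an anchor, such as a newly detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Detected a \(planeAnchor.alignment) plane at \(planeAnchor.center)")
    }
}
```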

Android  

On Android devices, as on iOS, the camera is used to detect surfaces. However, Android devices do not have the LiDAR functionality that iPhones offer, so much of the heavy lifting happens programmatically. Information about light, distance, and position is calculated through the ARCore SDK, which relies heavily on its ability to detect planes and surfaces. Though notably less accurate than measurements gathered through LiDAR, ARCore’s plane detection still allows realistic lighting and physics values to be applied to a model placed in the user’s real-world space.


Common Types of AR  

Now that we’ve explained what AR is, we’ll dive into the main types of AR and how each functions differently while accomplishing the same general end goal: placing an object in the user’s 3D space.

Below are the most common types of AR:  

Markerless AR  

Markerless AR does not require input from the real-world environment, meaning no specific surface, color, or shape needs to exist in the user’s space for a 3D model or object to render. Most of the time, however, the developer still accounts for plane detection so the objects are aware of things like the ground or walls.

For example, users of the Pokémon GO mobile game can view and interact with Pokémon placed in their environment. Amazon also uses markerless AR when rendering product previews, such as furniture or paint. Its mobile application allows users to move objects within their space while taking the shape of the area into account and reacting accordingly.
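
For a rough sense of how markerless placement works on iOS, the sketch below raycasts from a screen tap onto a detected horizontal plane and drops an anchor there. The function name is illustrative, and the `sceneView` is assumed to already be running a world-tracking session with plane detection enabled (as in the earlier sketch).

```swift
import ARKit
import UIKit

/// Places an anchor where a screen tap hits a detected horizontal plane.
func placeObject(in sceneView: ARSCNView, at tapPoint: CGPoint) {
    // Build a raycast from the tap toward existing plane geometry.
    guard let query = sceneView.raycastQuery(from: tapPoint,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }

    // Anchor the virtual object at the hit location; the renderer can then
    // attach a 3D model node to this anchor once it is added to the session.
    let anchor = ARAnchor(name: "placedObject", transform: result.worldTransform)
    sceneView.session.add(anchor: anchor)
}
```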

Marker-based AR  

Marker-based AR, on the other hand, requires a physical item, or marker, that the AR system recognizes and tracks so content is properly placed in the user’s space. Often the scene presented is related to its marker in some way. Marker-based AR is prevalent in art exhibitions and museums, where AR objects are associated with an artist’s collection. It is also commonly used on business cards to showcase 3D models in a compact, interactive format.
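
On iOS, marker-based experiences typically use ARKit’s image detection: the app bundles reference images of its markers, and ARKit reports an anchor whenever one appears in the camera feed. A minimal sketch, where the asset catalog group name “ARMarkers” is a placeholder:

```swift
import ARKit

/// Builds a configuration that recognizes bundled marker images.
func makeMarkerTrackingConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // "ARMarkers" is a placeholder name for an asset catalog group of markers.
    if let markers = ARReferenceImage.referenceImages(inGroupNamed: "ARMarkers",
                                                      bundle: .main) {
        // ARKit raises an ARImageAnchor whenever one of these images
        // (a business card, a painting, a product label) enters the camera view.
        configuration.detectionImages = markers
        configuration.maximumNumberOfTrackedImages = 1
    }
    return configuration
}
```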

Projection AR  

Projection AR does not require a mobile device to show AR objects. Instead, the object is superimposed onto a surface by a projector or similar device so that multiple people in the same space can view the item without individual devices. One example is projection mapping, which BMW used during a recent vehicle launch to show off its customizable designs. Another example of projection AR is holograms, including hologram performers.

Why Are Organizations Using Mobile Augmented Reality?  

With the growing adoption of e-commerce services, organizations understand the importance of utilizing digital solutions to optimize the selling process of products and services. And when it comes to AR, companies have several opportunities to drive new value. From furniture to fun, consumers and businesses are using AR in their day-to-day lives. Whether it’s developing a new frontier in mobile gaming, creating interactive maps and travel functions, designing room layouts, customizing vehicles, or creating an augmented reality medical room, there are numerous possibilities for businesses to invest in AR experiences.  

For example, augmented reality product configurators allow consumers to easily build custom designs and digitally position a product model in their real-world space. Through this virtual “try before you buy” experience, consumers can visualize, place, and configure a product in context before making a purchase.
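
As a sketch of how such a configurator could be put together on iOS with RealityKit, the snippet below loads a product model, applies a selected finish, and anchors it to a detected horizontal plane. The asset name, the material swap, and the function name are illustrative placeholders, not any specific retailer’s implementation.

```swift
import RealityKit
import UIKit

func showConfiguredProduct(in arView: ARView) {
    // "chair" stands in for a USDZ product model bundled with the app.
    guard let product = try? ModelEntity.loadModel(named: "chair") else { return }

    // Example "configuration" step: swap the material for the chosen variant.
    product.model?.materials = [SimpleMaterial(color: .systemBlue, isMetallic: false)]

    // Anchor the product to any horizontal plane ARKit detects (e.g., the floor).
    let anchor = AnchorEntity(plane: .horizontal)
    anchor.addChild(product)
    arView.scene.addAnchor(anchor)
}
```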


Best Mobile AR Development Platforms  

What are the different types of mobile AR development, and which are best suited for your AR needs? Before starting development work, organizations need to decide what type of application they want: native or cross-platform.   

Native development lets developers take full advantage of platform-specific features. In contrast, cross-platform development enables you to build the application for both Android and iOS without creating two different applications that may look or behave differently.

Once the type of application is selected, it’s time to focus on what development tool to use. For this article, we will focus on the most fleshed-out and accessible tools for creating an application, highlighting the benefits and limitations of each option in the AR space. 

ARKit (Native Swift Development)  

ARKit is Apple’s main framework for creating iOS augmented reality applications in Swift (a modern, object-oriented language that interoperates closely with Objective-C). With its newest version, ARKit 6, released in September 2022, Apple’s AR solution boasts plenty of features that streamline the development process. ARKit applications are typically written in Swift and can use SwiftUI or UIKit for their interfaces.
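
Because the AR view itself (RealityKit’s ARView or SceneKit’s ARSCNView) is a UIKit view, SwiftUI apps usually wrap it in UIViewRepresentable. A minimal sketch of that bridge, with illustrative type names:

```swift
import SwiftUI
import RealityKit
import ARKit

// Bridges RealityKit's UIKit-based ARView into a SwiftUI view hierarchy.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        arView.session.run(configuration)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        // Let the camera feed and AR content fill the whole screen.
        ARViewContainer()
            .ignoresSafeArea()
    }
}
```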

Benefits:   

  • Swift is a safe, open-source, and fast programming language that reduces redundancy in code and makes upkeep an approachable task.   
  • ARKit smoothly integrates with SwiftUI, which utilizes declarative syntax to produce modern and clean designs.   
  • ARKit allows you to create high-resolution AR experiences. Coupled with a powerful LiDAR sensor, it produces incredibly accurate and interactive scenes.
  • With motion capture, a depth API, and scene geometry, ARKit is one of the most robust AR libraries in mobile development today (see the depth sketch after this list).
  • A native iOS application can reach roughly 56% of the US mobile market.
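
To sketch the depth API and scene geometry mentioned above: on LiDAR devices a world-tracking session can request per-frame scene depth, and each ARFrame then carries a depth map. The configuration call is standard ARKit; the delegate class name is illustrative.

```swift
import ARKit

/// Configuration that requests per-frame scene depth on supported (LiDAR) devices.
func makeDepthConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    return configuration
}

/// Session delegate that reads the depth map each frame.
final class DepthReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let sceneDepth = frame.sceneDepth else { return }
        // sceneDepth.depthMap is a CVPixelBuffer of per-pixel distances (in meters)
        // that can drive occlusion, physics, or custom shader effects.
        _ = sceneDepth.depthMap
    }
}
```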

Limitations:   

  • This framework is iOS-specific, meaning any application created using ARKit and its related framework, RealityKit, is restricted to iOS devices.
  • The exclusivity also extends to the lack of support for cross-platform libraries that tap into the capabilities of ARKit, as Apple makes it incredibly difficult for developers to create SDKs for their frameworks.   
  • It limits which iOS users can access AR functionality, since ARKit is only available on iPhone 8 and newer.
  • Currently, ARKit only supports Alembic (.abc), Wavefront Object (.obj), Polygon (.ply), and Standard Tessellation Language (.stl) file types, which, while standard formats, can cause problems if models are created on another platform.
  • Building exclusively for iOS restricts an application to less than 50% of the global mobile market share.

ARCore (Native Android Development, Kind of)   

ARCore is a platform developed by Google for building augmented reality experiences, first released in March 2018. Technically, ARCore is a cross-platform SDK, not just native to Android. ARCore uses motion tracking and the camera to keep track of points in space, building an understanding of the world and the user’s position within it. This understanding allows users to place objects and make annotations, and lets developers use that data to change how an application integrates with the world.

Benefits:   

  • ARCore has cross-platform capabilities, providing native APIs for several platforms, including Unity, iOS, Unreal Engine, and Web.
  • The platform uses Kotlin, Java, or C as its primary programming languages, allowing developers to choose what best suits their needs.
  • ARCore’s ability to programmatically understand its surroundings without an advanced depth sensor is accurate, and its output integrates seamlessly into a scene when it renders.
  • Its ability to calculate relative position and track user movement allows dynamic changes like light estimation and physics calculations to be quick and clean. This is beneficial when trying to display an object realistically in a user’s space, such as pattern overlays or furniture.
  • It utilizes Google’s Cloud Anchors capability, allowing an application to share scenes between different devices. The same 3D object can be rendered and manipulated by multiple users simultaneously, with real-time updates reflecting all interactions.
  • Because ARCore can be used cross-platform, you can reach most, if not all, of the global mobile market.

Limitations:   

  • Some of the biggest hurdles ARCore encounters are its relatively limited features for scene creation, such as the lack of body tracking, poor depth sensing, and absence of large-scale object handling.   
  • The platform is not supported on many Android devices; it only works on devices running Android 8.1 or later that appear on Google’s list of ARCore-supported devices.
  • The languages themselves come with trade-offs when running an ARCore application: Java and C can easily produce bloated code with limited scalability when used with this SDK, and while Kotlin, a JVM-based language, removes much of Java’s clunkiness, it has a steeper learning curve for AR. Documentation for AR development in all of these languages is limited.

Unity  

Unity is a powerful rendering engine, first released in June 2005 and best known for its game development capabilities. This cross-platform software provides an interface that gives organizations access to all the essential tools needed for 2D and 3D development in one place.

Benefits:   

  • Allows for cross-platform mobile development as its AR Foundation currently supports features from ARKit, ARCore, HoloLens, and more.   
  • Unity projects are scripted in C#, an object-oriented language (with native plugins possible in C++). The code is typically modular, easy to update, and reusable.
  • Code is often easier to maintain than on other platforms, like ARCore or Xamarin, and can significantly reduce the redundancy that other programs may experience when writing out classes for AR objects.
  • Supports features of native Android and iOS devices, including device tracking, raycasting, and 2D image tracking.
  • Animations, modifications, and optimizations of 3D models and their components are typically prioritized, creating high-quality renders.  

Limitations:   

  • Unity’s AR Foundation has several limitations when building Android-compatible applications, namely issues with rendering as well as compatibility issues with camera positioning.
  • Any cross-platform application created in Unity is bound by ARCore’s limitations and restrictions, and apps compiled for iOS cannot take full advantage of the advanced features that native ARKit provides.
  • Like the other platforms, Unity projects are written in an object-oriented language, which can make applications more complex as new features are added, occasionally resulting in bloated code that affects scalability as new AR SDK updates come out.

Xamarin  

Xamarin.Forms is an open-source, cross-platform UI framework from Microsoft that allows developers to build Android and iOS applications from a single codebase. Xamarin added augmented reality capabilities in October 2018 and continues to be a popular cross-platform mobile app development framework.

Benefits:   

  • Xamarin projects are typically written in XAML with C# code behind, allowing the two platforms’ applications to share UI, code, tests, and business logic.  
  • Developers can still write platform-specific code for deeper customization and fuller use of each platform’s features.
  • Using C# and .NET eases updates to your code base and reduces redundancy when writing AR-specific code, meaning a single development team could reasonably switch between various platform-specific designs or patterns.  
  • Cross-platform development allows for a ‘write once, use everywhere’ approach that reduces duplicated effort and expense.
  • Being able to integrate native code gives developers the ability to utilize features that aren’t available for both platforms without making the entire application structure suffer from one platform’s limitations.  
  • Because Xamarin compiles apps into native binaries, they can be as performant as native apps.

Limitations:   

  • While Xamarin gets as close to native as possible, it can typically achieve only about 60–95% reusable code between platforms.
  • There is limited overlap between the way Android and iOS handle augmented reality, so any app created in Xamarin requires a hefty amount of platform-specific code.
  • The need to build out a lot of native code-behind can be time-consuming and reduce how well the code base is optimized.
  • Frame rate throttling can occur when more complex models animate, due to shared graphics and UI code.

Vervint: Your Mobile AR Development Partner  

We’re your partner for AR success. At every step of your journey, we’re here for you. With a focus on building strong partnerships, our dedicated team fosters collaboration, executes projects efficiently, provides ongoing support, and drives insights and growth for your business. Contact us today for all your mobile needs!

About the Author


Arie Williams

Mobile Consultant

Arie Williams is a mobile developer and consultant at Vervint. With an abundance of client experience ranging from e-commerce to IoT development, she prides herself in her hunger for knowledge. When she’s not working on projects, she either spends time working with local after-school programs to develop computing curriculum for young students, or she is out looking for the best food experiences Charlotte, NC has to offer.