Interview: Arvizio’s Revolutionary Mixed Reality Server Platform For Real-Time Visualization

Arvizio offers the industry’s first mixed reality server platform for real-time collaboration and advanced visualization. LiDAR, raster, vector and other types of data can all be integrated into the platform. Holographic collaboration allows 3D images to be viewed by many team members, each with their own viewing perspective, and the seamless integration of holographic computing and video communications creates a unique experience for sharing real-world visual information and augmented reality data among team members. 3D Visualization World interviewed Jonathan Reeves, CEO of Arvizio, to learn more about the newly founded company and the technology innovation it is bringing to market.

3DVW: How did you become interested in the kind of work Arvizio is involved in? What caused you to become involved in this area?

JR: It is an interesting time in the industry, with augmented and mixed reality now finding their way into commercial applications. I have spent the last few years working in cloud computing, with technologies ranging from virtualization and cybersecurity to, more recently, artificial intelligence and edge computing. When my colleagues and I first saw a demonstration of the HoloLens, we realized that the 3D visualization world was going to change significantly, and we believed there would be a need in the market for platforms that could allow advanced visualization and collaboration built specifically for mixed reality devices.

The founding team of Arvizio has a broad base of technology expertise, including 3D model processing, level of detail (LOD) processing, the management of large spatial data models from GIS and LiDAR, as well as IoT and networking, so it was natural for us to apply these skills to the emerging area of mixed and augmented reality.

3DVW: Let us start by asking what Arvizio does – what is the main focus of the company?

JR: Arvizio delivers an advanced software platform for augmented and mixed reality experiences utilizing the Microsoft HoloLens and other mixed reality devices. Our software platform brings new levels of efficiency to the workplace with state-of-the-art collaboration and 3D visualization techniques. The platform is designed to serve a variety of industry verticals including AEC (Architecture, Engineering and Construction), healthcare, industrial IoT, enterprise training and general education.

Our goal has been to offer the industry’s first mixed reality server platform for real-time visualization and collaboration, both locally and across locations. It is our view that large 3D models with full detail are a fundamental requirement for many forward-looking applications, but these are beyond the processing capabilities of most AR/MR headset devices such as the HoloLens. We set out to augment their local processing with a server software platform that can process and serve the data as required. This allows very large models, in the realm of many gigabytes of data and millions of polygons or points, to be displayed on the headset.

In addition, we provide a real time collaboration capability to share live video and audio feeds from the mixed reality device with remote viewers, such as remote specialists. The remote specialist can participate using a two-way live video feed and view the overlaid holographic models using a laptop, tablet or even a smartphone device.

3DVW: We noticed that the term ‘Mixed Reality Platform’ is featured in the company literature. What does that involve?

JR: There is often a degree of confusion regarding the differences between virtual, augmented and mixed realities. Virtual Reality (VR) is the most familiar of the three and is typically a computer-generated simulation of a 3D environment that can be interacted with in a seemingly real or physical way. The focus of VR tends to be total immersion in the experience.

Augmented Reality (AR) can be thought of as a “Heads-Up Display” (HUD) of 2D information (text, icons and graphics) overlaid on the world around you. AR typically focuses on information overlays that provide guidance and additional detail about the objects in the field of view.

Mixed Reality (MR) combines concepts from both AR and VR but places 2D and 3D virtual objects in positional space around you. For example, using mixed reality you can walk up to virtual objects and interact with them. You can ‘place’ a large 3D model on a conference table, or in free space, and then walk around the model, viewing it from all sides without sacrificing awareness of your physical surroundings. MR allows a powerful combination of real and virtual worlds.

The Arvizio platform provides a complete software solution to deliver mixed reality experiences. Our solution includes software that runs on the MR headset and software that runs on a connected laptop, desktop computer or server to feed content to multiple devices while allowing the operator to control the experience.

An additional aspect of a rich mixed reality experience is sharing live video streams. Arvizio leverages the latest advances in IP video technology to provide bandwidth efficient video calling, similar in concept to Skype, but in a private, secure and fully integrated environment. Through video and audio sharing, subject matter experts can observe live video and mixed reality content remotely and participate in the interactive experience. This is extremely useful in many areas of engineering, science and business as well as education.

3DVW: Spatial data appears to be central in the solutions that Arvizio provides along with the 3D visualization focus. Can you explain how these two come together using an example?

JR: Several types of 3D data are used in our mixed reality products, including 3D textured meshes, spatial data such as LiDAR point clouds, 3D volumetric models and raster data representations. Each has its own characteristics in terms of level of detail processing techniques, file formats and the approaches used to render it on mixed reality headset devices.

Mixed reality devices often use game engine platforms for the development of the apps that operate on the device. Unity 3D is the most commonly used; it is a powerful platform, but it has limitations when dealing with spatial data and real-time network communications. We enhance the game engine with native plugins that are optimized for spatial data processing. The combination is powerful, allowing cross-platform development with superior performance.

Increasingly, data types will be mixed in a single visualization experience. For example, a rendered 3D mesh object may overlay a LiDAR point cloud in a particular scene. With LiDAR becoming widespread in the field of autonomous vehicles and drones, one can imagine a number of scenarios in which such visual combinations will be valuable.

Similarly, in the AEC field, the ability to overlay virtual objects on the spatial data representation of a building or construction project can bring enhanced visualization capabilities during the conceptual, construction and inspection phases.

A unique spatial data concept associated with mixed reality devices is spatial mapping. The HoloLens, for example, has multiple sensors that allow the device’s software to construct a spatial model of the room in which the wearer is situated. These device-generated spatial maps are used to aid in the tracking of head movement and offer the ability to anchor a 3D model in a particular location in the room. The virtual object is spatially anchored and appears to remain situated in a given location.

In the future, spatial maps will become valuable data in their own right. For example, spatial maps of many rooms can be stored in a database; then, on entering a particular room, the corresponding map can be reloaded along with virtual display panels showing information pertinent to that room.
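To illustrate the idea of keying stored spatial maps and room-specific panels by a room identifier, here is a minimal Python sketch. It is not Arvizio's implementation; the record fields, room names and in-memory store are invented for the example.

```python
# Hypothetical sketch: store serialized spatial maps and room-specific display
# panels keyed by a room identifier, so a previously scanned room can be
# recognized and its content reloaded. Field names are invented.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class RoomRecord:
    room_id: str
    spatial_map: bytes                                  # serialized mesh from device spatial mapping
    panels: List[dict] = field(default_factory=list)    # virtual info panels anchored in the room

class SpatialMapStore:
    """In-memory stand-in for a spatial map database."""

    def __init__(self) -> None:
        self._rooms: Dict[str, RoomRecord] = {}

    def save(self, record: RoomRecord) -> None:
        self._rooms[record.room_id] = record

    def load(self, room_id: str) -> Optional[RoomRecord]:
        # On entering a room, fetch its stored map and panels for re-anchoring.
        return self._rooms.get(room_id)

store = SpatialMapStore()
store.save(RoomRecord("conference-room-2", spatial_map=b"<mesh bytes>",
                      panels=[{"title": "HVAC status"}]))
record = store.load("conference-room-2")
```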

3DVW: Real-time data is particularly important in your solutions. Why is it important and how do your solutions make it ‘useable’ in decision making?

JR: Real-time data can take many forms. For instance, information feeds from industrial machines or external sensors can be processed and used to augment the view in the headset. The real-world scene then becomes a living experience, very much like a HUD (heads-up display) of valuable information.

The types of real-time data that can be collected, processed and displayed must be flexible to accommodate a range of scenarios. The Arvizio platform includes a programmable framework to capture and pre-process sensor data, interconnect with external IT systems and pass information securely between systems.
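As a rough illustration of that capture-and-pre-process pattern (not Arvizio's actual framework; the sensor fields, alert threshold and message format are invented for the example), a sketch in Python:

```python
# Minimal sketch of the general pattern: capture raw sensor readings,
# pre-process them, and emit overlay-ready messages for a headset client.
import json
import time
from typing import Callable, Iterable

def preprocess(reading: dict) -> dict:
    # Example pre-processing step: format a temperature reading for display
    # and flag values above a (hypothetical) alert threshold.
    value = reading["temperature_c"]
    return {
        "machine_id": reading["machine_id"],
        "label": f"{value:.1f} °C",
        "alert": value > 80.0,
        "timestamp": reading.get("timestamp", time.time()),
    }

def overlay_stream(readings: Iterable[dict],
                   transform: Callable[[dict], dict] = preprocess):
    # Yields JSON messages that a headset app could render as HUD overlays.
    for reading in readings:
        yield json.dumps(transform(reading))

sample = [{"machine_id": "pump-7", "temperature_c": 84.2}]
for message in overlay_stream(sample):
    print(message)
```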

3DVW: We have many readers involved in engineering, 3D geodata and visualization, as opposed to 3D in health. While both are important areas, we do not see many companies working in both health and engineering with solutions like the kind you provide – that is unique. What makes your technology so useful for applications across disciplines and industries?

JR: The 3D rendering and visualization technology used for engineering use cases, such as processing CAD models, can also be applied to a number of areas in the medical field. In radiology, for example, the primary data format is DICOM. CT and MRI scans store data in the form of sequential 2D slices; a scan may consist of several hundred slices, which are usually viewed on 2D monitors.

We apply our 3D large model processing technology to construct a 3D model from the many individual slices, then automatically push the model to HoloLens devices for visualization. This provides an entirely different viewing experience and opens up new avenues for training, rehearsal of complex surgical procedures and improved interaction with the patient.
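The first step of that pipeline – assembling a 3D volume from a stack of DICOM slices – can be sketched in a few lines of Python using pydicom and NumPy. This is an illustrative example only, not Arvizio's pipeline, and the directory name is hypothetical.

```python
# Minimal sketch: reconstruct a 3D volume from a directory of DICOM slices.
import glob
import numpy as np
import pydicom

def load_volume(dicom_dir: str) -> np.ndarray:
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{dicom_dir}/*.dcm")]
    # Sort slices along the scan axis using the patient position tag.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    # Stack the 2D pixel arrays into a single 3D volume (slices x rows x cols).
    return np.stack([s.pixel_array for s in slices]).astype(np.int16)

volume = load_volume("scan_001")
print(volume.shape)  # e.g. (300, 512, 512) for a scan of a few hundred slices
```

A mesh suitable for headset display would then typically be extracted from such a volume with an isosurface algorithm such as marching cubes and decimated to a polygon budget the device can handle.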

In other industries, such as engineering and construction, the ability to view and interact with a virtual 3D model created from CAD drawings is important for many reasons. It serves as an essential, and cost-saving, way to present the finished product (without the need for a physical model), identify potential flaws early in the process, get all involved parties on the same page and provide a positive end-customer experience.

The nature of mixed reality tends to expose us to adjacent fields across industries. The common thread, however, is the need to visualize complex 3D information.

3DVW: Augmented reality is also part of your technology. Can you describe how AR is being used with it?

JR: Augmented reality (AR) is a widely used component of a mixed reality experience. AR techniques can be used, for example, to recognize identification markers and trigger the display of data specific to the object, or to display information from the real-time data feeds associated with a specific piece of machinery.

Mixed reality devices use SLAM (Simultaneous Localization and Mapping) techniques to track position, so position can also be used as a trigger for AR displays. Information security is also a critically important aspect of most applications, so AR techniques can be used as part of the identification process during multi-factor authentication. In fact, mixed reality can be understood simply as the seamless integration of your augmented reality with your perception of the real world.
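The position-triggered display JR mentions reduces to a simple proximity check against the tracked pose. A minimal Python sketch, with invented points of interest and radii:

```python
# Illustrative sketch: trigger an AR overlay when the device position reported
# by the headset's tracking (e.g. its SLAM pose) comes within range of a
# point of interest. All coordinates and labels here are hypothetical.
import math

POINTS_OF_INTEREST = {
    "pump-7": {"position": (4.2, 0.0, -1.5), "radius_m": 2.0, "overlay": "Pump 7 telemetry"},
}

def triggered_overlays(device_position):
    overlays = []
    for poi in POINTS_OF_INTEREST.values():
        if math.dist(device_position, poi["position"]) <= poi["radius_m"]:
            overlays.append(poi["overlay"])
    return overlays

print(triggered_overlays((3.0, 0.0, -1.0)))  # within 2 m of pump-7 -> show its overlay
```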

3DVW: Complex data and Big Data are areas of interest to many people. How does Arvizio help to tackle these kinds of data volumes?

JR: Complex data and Big Data require different processing techniques depending on the type of data to be processed. In the past, most Big Data processing was batch oriented but IoT and other streaming data use cases have changed the landscape. The Arvizio platform has a real-time processing engine with connectors to many external Big Data systems ranging from Hadoop to Apache Spark and NoSQL databases. We can also extend this interconnect capability to a wide range of external IT systems.

Large scale spatial data has its own unique characteristics and is not suited to traditional data handling methods. Our ASPEN (Advanced Spatial Processing Engine) technology allows us to store complex spatial data in data structures that can be rapidly indexed to access information with minimal latency. Traditional Big Data systems do not have the real-time data mining characteristics required for this class of application; big spatial data is a different animal.
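The general idea behind such rapid spatial indexing can be illustrated with a coarse voxel grid over a point cloud, so that a query only touches the cells near the point of interest instead of scanning the whole cloud. This sketch is generic and illustrative; it is not Arvizio's ASPEN engine, and the cell size and query values are arbitrary.

```python
# Illustrative sketch: bucket a point cloud into a voxel grid for fast
# "points near this location" queries.
from collections import defaultdict

import numpy as np

class VoxelIndex:
    def __init__(self, points: np.ndarray, cell_size: float = 1.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)
        keys = np.floor(points / cell_size).astype(int)
        for key, point in zip(map(tuple, keys), points):
            self.cells[key].append(point)

    def query(self, center, radius: float) -> np.ndarray:
        # Gather candidate points from the voxels overlapping the query sphere.
        lo = np.floor((np.asarray(center) - radius) / self.cell_size).astype(int)
        hi = np.floor((np.asarray(center) + radius) / self.cell_size).astype(int)
        candidates = []
        for x in range(lo[0], hi[0] + 1):
            for y in range(lo[1], hi[1] + 1):
                for z in range(lo[2], hi[2] + 1):
                    candidates.extend(self.cells.get((x, y, z), []))
        pts = np.array(candidates)
        if pts.size == 0:
            return pts
        mask = np.linalg.norm(pts - np.asarray(center), axis=1) <= radius
        return pts[mask]

cloud = np.random.rand(100_000, 3) * 50.0      # stand-in for a LiDAR point cloud
index = VoxelIndex(cloud, cell_size=2.0)
nearby = index.query((25.0, 25.0, 25.0), radius=3.0)
```

Production systems use more sophisticated hierarchical structures (octrees, tiled LOD pyramids) and out-of-core storage, but the principle of indexing space so that only relevant data is touched is the same.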

3DVW: What do you consider to be the challenges in terms of 3D and visualization today?

JR: One of the greatest challenges is handling very large 3D models on portable devices such as AR/MR headsets, smartphones and tablets. While the processing power of ‘stand-alone’ devices has increased substantially, there is still a significant gap between the CPU and GPU processing power available in such devices and that of a desktop workstation. Optimizing the data hierarchy and controlling the interaction between the remote devices, local edge computing servers and the cloud requires a well-designed data management architecture that minimizes latency and shares the processing burden.

Level of Detail (LOD) processing is a familiar concept but mixed reality devices change the requirements. For example, the level of detail may need to change dynamically as the user approaches an object – the closer to the object, the greater the resolution required. This requires rapid indexing of data and the ability to traverse the LOD database in real time.
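The distance-driven selection JR describes can be expressed very simply; the real work lies in indexing and streaming the data for each level. A minimal Python sketch, with hypothetical level names and thresholds:

```python
# Minimal sketch of distance-driven LOD selection: the closer the viewer is
# to the object, the higher-resolution representation is requested.
import math

LOD_LEVELS = [
    (2.0, "lod0_full_resolution"),   # within 2 m: full detail
    (8.0, "lod1_medium"),            # within 8 m: medium detail
    (float("inf"), "lod2_coarse"),   # beyond that: coarse proxy
]

def select_lod(viewer_pos, object_pos) -> str:
    distance = math.dist(viewer_pos, object_pos)
    for max_distance, level in LOD_LEVELS:
        if distance <= max_distance:
            return level
    return LOD_LEVELS[-1][1]

print(select_lod((0.0, 1.6, 0.0), (1.0, 1.0, 1.5)))  # close -> "lod0_full_resolution"
```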

There is a fair degree of adaptation required to handle the plethora of 3D formats and graphics processing approaches used by vendors, which often makes seamless interoperability across platforms difficult, but this is an important requirement for the future. Cross-platform solutions will be essential.

3DVW: Some readers voice the idea that it is difficult to find suitably trained and educated employees in 3D and visualization. Do you find that is true? What kinds of education do you think lead to positive innovation in your area?

JR: It is true that there are special skills required for 3D processing and visualization, and this is outside the curriculum of many computer science programs. One encouraging sign is the greater alignment of game technology and general 3D processing. Virtual and mixed reality application development has many areas of overlap with 3D games, and even the tools are beginning to align.

Although educational institutions have incorporated animation and modeling tools into their curriculum, many students and developers are utilizing open source tools and SDKs to jump into VR/AR/MR development – similar to the early days of app development. As industry demand grows, I believe we will see more emphasis on including many of the traditional gaming development techniques into the computer science degree track. These factors, as well as the industry excitement about artificial reality in general, will likely attract new talent into the field.

3DVW: What can you say to the many people sitting with 3D data or technologies that do not know how to bring them alive, to add value and to visualize?

JR: One of the key elements of Arvizio’s solution is its adaptability to a wide variety of business applications. We are happy to consult with these organizations to demonstrate how we can incorporate their 3D data into fully interactive solutions that increase productivity, enhance training and provide a rich customer experience. Mixed reality is the medium that will bring their concepts, technologies and data to life through interactive visualization and the ability to share and collaborate in real time. We will be providing demonstrations of our solutions at the AWE USA 2017 conference in Santa Clara, CA on June 1–2. In addition, we welcome discussion, and those interested in more information can contact Arvizio directly via our website.

——————————————————————————-

Jonathan Reeves, CEO, is a recognized leader and serial entrepreneur experienced in the development of forward-looking technology business ventures. Prior to co-founding Arvizio, Jonathan served as Chairman at CloudLink Technologies (acquired by EMC), and founded Mangrove Systems (acquired by Carrier Access), Sirocco Systems (acquired by Sycamore Networks) and Sahara Networks (acquired by Cascade Communications). Jonathan has served as an Operating Partner with Bessemer Venture Partners and previously served on the Board of Trustees at Quinnipiac University.
