ARUWPController Options in HoloLensARToolKit v0.2

This post is part of documentation of HoloLensARToolKit, version v0.2. The ARUWPController options documentation for v0.1 is here.

ARUWPController

ARUWPController.cs is one of the main scripts used in HoloLensARToolKit. In this post, the options of this script are listed and discussed, along with common use cases.

Each Unity project using the HoloLensARToolKit package must have exactly one ARUWPController component.

ARUWPController is very similar to ARController in ARToolKit. One major difference is that ARUWPController targets only the Universal Windows Platform, while ARController also handles Android, iOS, standalone builds, and even the editor. Therefore, ARUWPController generally has fewer attributes than ARController.

When the ARUWPController script is attached to a Unity GameObject, its inspector window looks like this:

Use Camera Param

  • This checkbox indicates how the camera parameters are initialized.
  • If this box is checked, the field Camera Param Filename will appear. If not, users must manually initialize the camera parameters by setting a byte buffer, via a call to ARUWPController.SetCameraParamBuffer().
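As a sketch of the manual path, the calibration data can be read as raw bytes and passed to the controller before tracking starts. This assumes SetCameraParamBuffer() accepts a byte[]; the script name and file name below are illustrative, not part of the package:

```csharp
using System.IO;
using UnityEngine;

public class ManualCameraParam : MonoBehaviour
{
    // Illustrative name of an ARToolKit-format calibration file
    // placed under Assets/StreamingAssets/.
    public string calibrationFile = "hololens_camera.dat";

    void Start()
    {
        // Read the binary calibration data as a byte buffer ...
        byte[] buffer = File.ReadAllBytes(
            Path.Combine(Application.streamingAssetsPath, calibrationFile));

        // ... and hand it to the controller, instead of relying on
        // the Camera Param Filename field in the inspector.
        ARUWPController controller = GetComponent<ARUWPController>();
        controller.SetCameraParamBuffer(buffer);
    }
}
```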

Camera Param Filename

  • This field specifies the name of the camera calibration file, located under Assets/StreamingAssets/.
  • The camera calibration file must be in ARToolKit format. It is a binary file, rather than the XML or YAML formats used by OpenCV. Please refer to the HoloLens Camera Calibration project for more details.

Pattern Detection Mode

This field configures what kind of marker the detection algorithm looks for.

  • If only pattern markers are used, e.g. Hiro or Kanji, then AR_TEMPLATE_MATCHING_COLOR or AR_TEMPLATE_MATCHING_MONO is enough.
  • If only matrix markers are used, e.g. 3x3 code markers, then AR_MATRIX_CODE_DETECTION is sufficient.
  • If both kinds of markers need to be detected, then AR_TEMPLATE_MATCHING_COLOR_AND_MATRIX or AR_TEMPLATE_MATCHING_MONO_AND_MATRIX must be chosen.

Matrix Code Type

This field appears if the Pattern Detection Mode involves the detection of matrix markers. There are many types of matrix markers; some of them are supported by ARToolKit and this project. The most common marker set is AR_MATRIX_CODE_3x3. All available options are:

  • AR_MATRIX_CODE_3x3
  • AR_MATRIX_CODE_3x3_PARITY65
  • AR_MATRIX_CODE_3x3_HAMMING63
  • AR_MATRIX_CODE_4x4
  • AR_MATRIX_CODE_4x4_BCH_13_9_3
  • AR_MATRIX_CODE_4x4_BCH_13_5_5
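If these two options are configured from a script rather than the inspector, the setup might look like the following sketch. The property and constant names are assumptions based on the option names above, not verified against the v0.2 API:

```csharp
// Hypothetical script-side configuration of the detection options.
ARUWPController controller = GetComponent<ARUWPController>();

// Detect both template markers (e.g. Hiro, Kanji) and matrix markers.
controller.patternDetectionMode = ARUWP.AR_TEMPLATE_MATCHING_COLOR_AND_MATRIX;

// Use the most common matrix marker set.
controller.matrixCodeType = ARUWP.AR_MATRIX_CODE_3x3;
```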

Track FPS Holder (optional)

This field takes a Unity.UI.Text object used to print the tracking frame rate. It is very useful for debugging or inspecting the performance of the application. In the sample scenes provided by HoloLensARToolKit, this text field appears at the top-right corner of the user's view. Because it is not required for tracking to run, this field can be left blank.

Render FPS Holder (optional)

Similar to Track FPS Holder (optional), this field is optional and visualizes the rendering frame rate of the application. Rendering here means the refreshing of the whole application, not the refreshing of the video.

Advanced Options

The options above are essential and should be considered for every application. If Advanced Options is checked, more options are listed, allowing users to tune the performance of ARUWPController. The following screenshot shows the full list of options:

Border Size

The percentage of the marker taken up by its border; by default, it is 0.25. For example, on an 80cm-wide marker, the border is 20cm (25%).

Labeling Mode

It configures the color of the marker border. AR_LABELING_BLACK_REGION is the default.

Image Proc Mode

The mode of image processing; by default, AR_IMAGE_PROC_FRAME_IMAGE is chosen.

According to ARToolKit documentation,

When the mode is AR_IMAGE_PROC_FIELD_IMAGE, ARToolKit processes pixels in only every second pixel row and column. This is useful both for handling images from interlaced video sources (where alternate lines are assembled from alternate fields and thus have one field time-difference, resulting in a “comb” effect) such as Digital Video cameras.

Thresholding Mode

You can choose a different thresholding algorithm from the dropdown list, the same as in ARToolKit. Available options are:

  • AR_LABELING_THRESH_MODE_MANUAL
  • AR_LABELING_THRESH_MODE_AUTO_MEDIAN
  • AR_LABELING_THRESH_MODE_AUTO_OTSU
  • AR_LABELING_THRESH_MODE_AUTO_ADAPTIVE
  • AR_LABELING_THRESH_MODE_AUTO_BRACKETING

Threshold

This field appears when Thresholding Mode is set to AR_LABELING_THRESH_MODE_MANUAL. ARToolKit thresholds the grayscale image before corner extraction, and the value set here is used as the threshold to obtain the binary (black-and-white) image.
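As a standalone illustration of what manual thresholding does (this is a sketch for explanation, not HoloLensARToolKit code), each grayscale pixel is compared against the threshold to produce a binary image:

```csharp
public static class ThresholdDemo
{
    // Binarize a grayscale buffer: pixels darker than the threshold become
    // black (candidate marker regions), the rest become white.
    public static byte[] Binarize(byte[] gray, byte threshold)
    {
        byte[] binary = new byte[gray.Length];
        for (int i = 0; i < gray.Length; i++)
        {
            binary[i] = gray[i] < threshold ? (byte)0 : (byte)255;
        }
        return binary;
    }
}
```

For example, `Binarize(new byte[] { 40, 120, 200 }, 100)` yields `{ 0, 255, 255 }`: only the pixel darker than the threshold is mapped to black.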

Finally

You can access more articles describing the implementation details of HoloLensARToolKit on my blog by clicking on the tag: hololens-artoolkit.

Thanks for reading!


IEEEVR 2017 Highlight

The IEEE Virtual Reality 2017 conference was held in Los Angeles from March 18th to 22nd. I was lucky to attend this exciting conference. Our team from Johns Hopkins University had three poster presentations. In this post, I would love to share some highlights and interesting projects that I saw during the conference.

Virtual Reality has become a popular term in recent years, with more and more revolutionary products, cool demos, and potential application domains. IEEEVR is undoubtedly the top conference in this technology domain. The papers, posters, and demos at IEEEVR represent the most recent breakthroughs in VR research. Some of them are selected and briefly described in this post. You can refer to the full proceedings of IEEEVR 2017 when they become available.

HMD vs. CAVE

SEARIS (Software Engineering and Architectures for Realtime Interactive Systems) is one of the workshops held at IEEEVR on Sunday. A panel, “Big metal VR in the HMD and Unity era”, attracted the attention of many researchers, including me. The “big metal VR” here refers to CAVE.

A cave automatic virtual environment (better known by the recursive acronym CAVE) is an immersive virtual reality environment where projectors are directed to between three and six of the walls of a room-sized cube.

CAVE was one of the major representatives of Virtual Reality technology for a long time. However, as indicated in the name of the panel, VR has entered the “HMD and Unity era”, pretty much since the appearance of good HMDs, for example, Oculus. Projector-based VR and AR systems are becoming rare. Apparently, CAVE technology is going through a hard time. Researchers attending this panel had a heated discussion about the current situation and the future of CAVE.

Several years ago, head-mounted displays were not as mature as today. People thought HMDs were too bulky, expensive, and ergonomically uncomfortable for VR experiences. CAVE was absolutely the choice for VR applications, and it has been widely used in areas like military training. However, engineers tackled those problems quickly, and HMDs brought VR out of the labs and into the general consumer market. HMDs are cheap, easily accessible, and powerful. On the other hand, a CAVE is still difficult to set up: careful calibration is needed to guarantee a comfortable experience for the users, more space is required, and it is much more expensive compared to HMDs.

Still, CAVE has its unique advantages over HMDs. It provides a natural environment for shared VR experiences, and a well-calibrated CAVE provides a better sense of immersion. If you have tried some Disneyland attractions, for example, Soaring Over the Horizon, you will know that CAVE is still great.

Just as very few people could predict the current popularity of HMDs several years ago, advancements in CAVE technology may yet bring people’s faith back.

Interesting Papers

  • Efficient Hybrid Image Warping for High Frame-Rate Stereoscopic Rendering @Paper
  • Paint with Me: Stimulating Creativity and Empathy While Painting with a Painter in Virtual Reality @Paper @Page
  • Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
  • Wide Field Of View Varifocal Near-Eye Display Using See-Through Deformable Membrane Mirrors @Paper @Page
    • Best Paper Award

(This selection is based on my personal interest.)

Interesting Posters

  • Monocular Focus Estimation Method for a Freely-Orienting Eye using Purkinje-Sanson Images
  • The AR-Rift 2 Prototype @Page
  • Estimating the Motion-to-Photon Latency in Head Mounted Displays
  • Hand Gesture Controls for Image Categorization in Immersive Virtual Environments
  • Resolution-Defined Projections for Virtual Reality Video Compression
  • A Methodology for Optimized Generation of Virtual Environments Based on Hydroelectric Power Plants
  • Robust Optical See-Through Head-Mounted Display Calibration: Taking Anisotropic Nature of User Interaction Errors into Account
    • Our poster won the Honorable Mention for Best Poster

Interesting Research Demos

  • Diminished Hand: A Diminished Reality-Based Work Area Visualization @Video
  • 3DPS: An Auto-calibrated Three-Dimensional Perspective-Corrected Spherical Display @Video
  • mpCubee: Towards a Mobile Perspective Cubic Display using Mobile Phones
  • Towards Ad Hoc Mobile Multi-display Environments on Commodity Mobile Devices @Video

IEEEVR is full of fun and innovative ideas. It is also very good to see more and more Chinese researchers and engineers in this field.

Thanks for reading! LQ