# AnySurface

AnySurface is a collection of ES6 modules that are broadly concerned with interactions with a
projected surface.

`Surface.js` is the framework-like entrypoint into AnySurface. It collects all the underlying
AnySurface modules into one central place. Surface does the following:

- It performs a Gray code scan and computes a camera-projector correspondence.
- It computes differences between two camera-projector correspondences (3d scan).
- It detects the laser pointer (with the aid of a running laser-server).
- It displays a camera alignment screen, helping to align and focus the camera.
- It infers a good shutter value for the camera, making it resilient to varying brightness
  conditions.
- It draws a laser blind spot map on the scan canvas, letting the user learn where the
  laser pointer is likely to work or not work.

The main inputs to Surface are a camera server URL and a stack of canvases on which Surface does
its work. It is the job of the client (the user of Surface) to ensure that the right canvases are
displayed at the right time. Surface currently forces these canvases to fill the screen by setting
their dimensions to `window.innerWidth` and `window.innerHeight`.

## Setup

```js
import { Surface } from "./path/to/AnySurface/Surface";
// Create the surface. The laser-server is assumed to be running at the
// camera server URL.
const surface = new Surface();
// Initialize the surface with the stack of canvases:
surface.init({
  // The canvas on which we draw the stripe scan.
  // At the end of each stripe scan, the laser blindness map
  // is drawn on this canvas.
  scanCanvas: document.getElementById("scan-canvas"),
  // The canvas on which we draw the laser cursor.
  pointerCanvas: document.getElementById("pointer-canvas"),
  // The canvas on which we draw the alignment screen.
  alignCanvas: document.getElementById("align-canvas"),
  // The canvas on which we draw the shutter test.
  shutterCanvas: document.getElementById("shutter-canvas"),
});
```

## Check if the camera server is up

```js
surface.cameraServerIsUp(itsUpCallback, itsDownCallback, waitTime);
```

- `itsUpCallback` gets invoked if the server is up
- `itsDownCallback` gets invoked if the server is down
- `waitTime` is the amount of time, in milliseconds, to wait for the camera server before giving
  up. When it gives up, `itsDownCallback` is invoked.
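
Since Surface's entry points are callback-based, clients may find it convenient to wrap them in promises. A minimal sketch, assuming an initialized Surface instance (the wrapper name is ours, not part of the API):

```javascript
// Hypothetical promise wrapper around the callback API described above.
function waitForCameraServer(surface, waitTime = 5000) {
  return new Promise((resolve, reject) => {
    surface.cameraServerIsUp(
      () => resolve(),                                      // server is up
      () => reject(new Error("camera server unreachable")), // gave up
      waitTime
    );
  });
}
```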

## Infer a shutter value

The `shutterCanvas` needs to be visible:

```js
surface.findAndSetShutterValue(successCallback, errorCallback);
```

- `successCallback` is called if the shutter is successfully set
- `errorCallback` is called if no shutter value can be set. This is usually because the camera
  is inaccessible, or because no usable brightness can be reached -- e.g. the room is completely
  dark, or the lens cap is still on.
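
A sketch of how a client might drive this step, showing the shutter canvas only for the duration of the call. The helper and its canvas handling are illustrative, not part of the API:

```javascript
// Show the shutter canvas, infer a shutter value, then hide the canvas
// again whether the call succeeds or fails.
function inferShutter(surface, shutterCanvas) {
  shutterCanvas.style.display = "block";
  return new Promise((resolve, reject) => {
    const done = (settle) => (value) => {
      shutterCanvas.style.display = "none";
      settle(value);
    };
    surface.findAndSetShutterValue(done(resolve), done(reject));
  });
}
```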

## Draw the alignment screen

The `alignCanvas` needs to be visible:

```js
surface.align.start();
```

Stop drawing the alignment screen:

```js
surface.align.stop();
```

It's important to call `stop()`; otherwise the align object keeps making requests to the
camera server indefinitely.
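
One way to guarantee `stop()` is called is to tie it to a dismissal event. A sketch (the event target is a parameter so it can be any event source; in a browser it would typically be `window`):

```javascript
// Run the alignment screen until the target fires a "keydown",
// always stopping the camera-server polling afterwards.
function runAlignment(surface, target) {
  surface.align.start();
  const stop = () => {
    surface.align.stop(); // stop polling the camera server
    target.removeEventListener("keydown", stop);
  };
  target.addEventListener("keydown", stop);
  return stop; // callers may also stop programmatically
}
```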

## Do a correspondence scan

The `scanCanvas` needs to be visible:

```js
surface.grayScan.flatScan(successCallback, errorCallback);
```

- `successCallback` is invoked if the scan succeeded. `surface.grayScan.cameraProjectorMapping` will
  be set to a `Scan/CameraRaster` object.
- `errorCallback` is invoked if the scan failed.
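
Putting the previous steps together, one plausible startup sequence is: verify the camera server, infer a shutter value, then run the correspondence scan. A sketch using the callback APIs documented above (the function name and the 5-second timeout are our choices):

```javascript
// Chain the documented callback APIs: server check, shutter, then scan.
function calibrate(surface, onReady, onError) {
  surface.cameraServerIsUp(
    () =>
      surface.findAndSetShutterValue(
        () => surface.grayScan.flatScan(onReady, onError),
        onError
      ),
    onError,
    5000 // wait up to 5 s for the camera server
  );
}
```

Remember that the client is still responsible for making the shutter and scan canvases visible at the right moments.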

## DOM event generation

If the correspondence scan is successful, Surface will automatically:

- start detecting the laser (assuming `laser-server` is running)
- actively draw the laser cursor on `pointerCanvas`
- generate synthetic events in response to laser activity:
  - `mousedown` when the laser turns on
  - `mouseup/click` when the laser turns off
  - `mousemove` when the laser moves
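
Because these are ordinary DOM events, a client can consume laser input with plain listeners. A sketch that collects one "stroke" per laser on/off cycle (the stroke abstraction is ours; the events themselves are as listed above):

```javascript
// Collect [x, y] samples between mousedown (laser on) and mouseup
// (laser off), then hand the finished stroke to a callback.
function trackLaserStrokes(target, onStroke) {
  let stroke = null;
  target.addEventListener("mousedown", (e) => {
    stroke = [[e.clientX, e.clientY]];
  });
  target.addEventListener("mousemove", (e) => {
    if (stroke) stroke.push([e.clientX, e.clientY]);
  });
  target.addEventListener("mouseup", () => {
    if (stroke) onStroke(stroke);
    stroke = null;
  });
}
```

In a browser this would typically be attached to `pointerCanvas`.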

# 3D Scan

The 3D scan computes a _projector-space_ raster (`Scan/DiffRaster`) by differencing two camera-space rasters (`Scan/CameraRaster`).

First, flatten the sand and invoke `surface.grayScan.flatScan` as described above. This will store the first camera-space raster. Next, invoke the one-time step:

```js
calibrationMoundScan(x0, y0, x1, y1, callback, errorCallback);
```

- The first four arguments describe a projector-space bounding box that is _assumed to have a mound in it_. This assumption helps Surface learn the difference between heights and depressions in the scene.
  - `x0`, `y0` -- the northwest corner
  - `x1`, `y1` -- the southeast corner
- `callback` is called upon success, with a `Scan/DiffRaster` object as its argument. Each cell in the raster contains an elevation value for its corresponding point in projector space. It is up to the user to map this low-resolution raster to the full width and height of the projector space.
- `errorCallback` is called in case of failure.

Once `calibrationMoundScan` has been called, it does not need to be called again for as long as Surface remains active in memory. It can, however, be invoked again at any time to re-learn the difference between mounds and depressions, e.g. after errors or inaccuracy.
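
Under the assumption that `calibrationMoundScan` lives on `surface.grayScan` alongside `flatScan`, the one-time calibration flow might look like this (the bounding box values are illustrative):

```javascript
// Scan the flattened scene, then -- with a mound piled inside a known
// projector-space box -- run the one-time mound calibration.
function calibrate3d(surface, onDone, onError) {
  surface.grayScan.flatScan(() => {
    // A mound is assumed to sit inside this projector-space box.
    surface.grayScan.calibrationMoundScan(100, 100, 400, 400, onDone, onError);
  }, onError);
}
```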

To produce a new `Scan/DiffRaster` every time the scene changes, call

```js
moundScan(callback, errorCallback);
```

- `callback` is invoked upon success with the resulting `Scan/DiffRaster` object as its argument.
- `errorCallback` is invoked upon failure.
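
A sketch of wiring this to a scene-change trigger. Here `renderElevation` is a hypothetical consumer of the raster, and `moundScan` is assumed to live on `surface.grayScan` like the other scan calls:

```javascript
// Rescan on demand (e.g. from a button press) and hand the resulting
// DiffRaster to a renderer.
function rescan(surface, renderElevation, onError) {
  surface.grayScan.moundScan(
    (diffRaster) => renderElevation(diffRaster),
    onError
  );
}
```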

_Note:_ Whenever any of `flatScan`, `calibrationMoundScan`, or `moundScan` is called, `surface.grayScan.cameraProjectorMapping` is set to the resulting camera-space raster and the input simulator is restarted on it. In other words, pointer detection remains accurate after every scan, automatically.

# Camera Interface

The camera interface is a wrapper around the [image-loader](./ImageLoaders/README.md) and [camera-server](./CameraServer/README.md) modules. It provides a simple way to interact with different kinds of cameras.

### Supported Camera Types

- UVC cameras (webcams with browser support)
  - Uses the built-in browser settings support when available, falling back to the UVC server
- GeniCam cameras through the laser-server
- Axis cameras through their web interface
- Generic IP cameras

[Example](./tests/basic.html):

```js
import { getDevicesIncludingGeniCam } from "../lib/ImageLoaders/utils.js";
import { delayPromise } from "../lib/utils.js";
import { CameraInterface } from "../lib/CameraInterface.js";

const devices = await getDevicesIncludingGeniCam();
console.log(devices);

// Get one image from each device, sequentially
const interfaces = [];
for (const device of devices) {
  const face = new CameraInterface();
  await face.detectCamera(device.label);
  const img = await face.imageLoader.getImage();
  document.body.appendChild(img);
  await delayPromise(1000); // pause between devices
  interfaces.push(face);
}
```

![Camera Interface](./img/anysurfaceInterfaces.jpg)

# Laser Calibrator

`LaserCalibrator.js` is a custom web component that serves as the main interface for the AnySurface library. As the name suggests, its focus is managing the settings necessary for quality laser tracking. The app using the web component must create a `Surface` object and pass a stack of canvases to `surface.init()`. It should then call `surface.detectCamera(cameraName, cameraUrl)` to replace the dummy camera interface with the correct ImageLoader and settings interface for the specific camera. The `cameraUrl` argument is needed when running the Python laser-server.

Example:

```js
const surface = new Surface();

surface.init({
  scanCanvas: document.getElementById("scan-canvas"),
  pointerCanvas: document.getElementById("pointer-canvas"),
  alignCanvas: document.getElementById("align-canvas"),
  shutterCanvas: document.getElementById("shutter-canvas"),
});

surface.detectCamera(cameraName, cameraUrl);

let anySurfaceCalibrator = document.getElementById(
  "laser-calibrator-component"
);
anySurfaceCalibrator.surface = surface;
anySurfaceCalibrator.setAttribute("surface-available", true);
```

From the web component, the user can change the laser threshold (the brightness required to register a laser event), the dark Shutter/Exposure and dark Gain/Brightness/ISO ("dark" referring to the camera settings used in laser-tracking mode), the camera, the camera resolution, and the laser status (on or off). These settings update values for the specific device in Firebase and local storage, and are set to default values from `Projectors.js` when switching cameras or setting up a new camera.

The web component is activated by setting the `surface-available` attribute to true after injecting `surface` onto it (see the example above). "surface" in `surface-available` refers to the object created from the `Surface` class. The web component mutates values on this object directly, so they are immediately available in the app consuming the web component. Setting `surface-available` to false stops the animation for the histogram and brightestPoint canvases and sets the video `src` to null. Depending on the consuming app's resources and use case, `surface-available` can be set to true once surface has finished initializing (after `surface.detectCamera` completes), or every time the web component is shown or added to the DOM.
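
The show/hide discipline described above can be sketched as a pair of helpers (the helper names are ours):

```javascript
// Inject the surface and wake the component up.
function showCalibrator(calibrator, surface) {
  calibrator.surface = surface;
  calibrator.setAttribute("surface-available", "true");
}

// Stop the histogram/brightestPoint animations and release the video.
function hideCalibrator(calibrator) {
  calibrator.setAttribute("surface-available", "false");
}
```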

The 'close' button in the web component interface automatically saves the new values. There is currently no way to reset values the user has changed. When a new camera or new resolution is selected through the web component, the bright shutter, alignment, and Gray code scan run after the user clicks 'close', updating the other relevant parts of the surface object.
