# GrayCode Scan System Documentation

## Overview
The graycode scan system creates a precise mapping between projector pixels and camera pixels by projecting structured light patterns and analyzing how they appear in camera captures. This enables accurate coordinate transformation for laser pointer detection.

## System Architecture

### Core Components
- **GrayScan** (`lib/GrayScan.js`) - Primary entry point; wraps Correspondence with runtime defaults and manages InputSimulator lifecycle
- **Correspondence** (`lib/Scan/Correspondence.js`) - Manages scan types and processing
- **StripeScan** (`lib/Scan/StripeScan.js`) - Core pattern projection and capture logic
- **CameraRaster** (`lib/Scan/CameraRaster.js`) - Stores camera→projector mapping
- **ProjectorRaster** (`lib/Scan/ProjectorRaster.js`) - Stores projector→camera mapping (reverse of CameraRaster)

### Configuration Parameters

**Correspondence defaults** (`lib/Scan/Correspondence.js:9-26`):
```javascript
defaultConfig = {
  vertFrames: 8,            // Creates 2^8 = 256 vertical stripes
  horizFrames: 7,           // Creates 2^7 = 128 horizontal stripes
  darkStripeColor: "#000",
  lightStripeColor: "#ddd",
  varianceThreshold: 0.4,   // Surface area detection threshold
  dummyScanDir: "img/scantest-sand",  // For offline testing
  pictureDelay: 600         // ms between projection and capture
}
```

**GrayScan overrides** (`lib/GrayScan.js:9-15`):
```javascript
// GrayScan passes these to Correspondence, overriding some defaults:
{
  vertFrames: 8,
  horizFrames: 7,
  varianceThreshold: 0.05,  // More permissive than Correspondence default
  pictureDelay: 120         // Faster captures (passed as constructor param)
}
```

**Output Resolution**: 256×128 projector-space correspondence grid

**Dimension Sync**: GrayScan also calls `inputSimulator.setDimensions(dataWidth, dataHeight)` at construction to ensure the input simulator matches the correspondence grid size.
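The relationship between frame counts and grid size follows directly from the config; a minimal sketch (the `dataWidth`/`dataHeight` names mirror the text above and are assumed, not copied from the repo):

```javascript
// Each vertical frame doubles the number of resolvable columns,
// each horizontal frame the number of resolvable rows.
const vertFrames = 8;
const horizFrames = 7;
const dataWidth = 2 ** vertFrames;   // 256 columns
const dataHeight = 2 ** horizFrames; // 128 rows
```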

## Step-by-Step Scanning Process

### 1. Initialization Phase
**Location**: `lib/GrayScan.js:65-83` (flatScan method)
```javascript
// Stop laser input simulation during scan
this.inputSimulator.stop();

// Optimize camera settings for stripe detection
await this.setCameraParametersToBrightMode();
```

### 2. Dimension Detection
**Location**: `lib/Scan/StripeScan.js:48-52`
- Capture initial camera image to determine resolution
- Initialize `CameraRaster` with camera dimensions
- Each camera pixel will store its projector coordinate mapping
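Each raster cell has to accumulate statistics across all frames. A plausible shape for that per-pixel record, inferred from the fields referenced in the analysis and decoding steps (a sketch, not copied from `CameraRaster`):

```javascript
// One record per camera pixel; field names match those used by the
// brightness analysis and Gray code decoding steps.
function makeRasterCell() {
  return {
    x: 0, y: 0,                    // decoded projector coordinate, built bit by bit
    min: Infinity, max: -Infinity, // brightness extremes across all frames
    variance: 0,                   // max - min (a range, despite the name)
    enabled: true                  // cleared later for low-variance pixels
  };
}
```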

### 3. Dummy Frame Capture
**Location**: `lib/Scan/StripeScan.js:168-184`

**Purpose**: Ensure every pixel experiences both black and white across all frames
- **Pattern**: Left half black (#000), right half white (#ddd)
- **Timing**: 300ms projection delay + pictureDelay for capture
- **Function**: Prevents edge cases where pixels are always bright/dark

### 4. Vertical Stripe Sequence (8 Frames)
**Location**: `lib/Scan/StripeScan.js:361-380`

**Gray Code Pattern Progression**:
```
Frame 1: 2 stripes   -> |████████|········| (2^1 stripes)
Frame 2: 4 stripes   -> |████|····|████|····| (2^2 stripes)
Frame 3: 8 stripes   -> |██|··|██|··|██|··|██|··| (2^3 stripes)
...
Frame 8: 256 stripes -> individual pixel-width stripes (2^8 stripes)
```
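One way to derive which columns are painted dark in a given frame, consistent with the bright = 0 / dark = 1 convention implied by the decoder described later (a sketch under those assumptions, not the actual `paintStripes` implementation):

```javascript
// Gray code of n: adjacent integers differ in exactly one bit.
function grayCode(n) {
  return n ^ (n >> 1);
}

// True if projector column `col` is painted dark in `frame`
// (1-based; frame 1 is the coarsest pattern / most significant bit).
function isDark(col, frame, totalFrames) {
  return ((grayCode(col) >> (totalFrames - frame)) & 1) === 1;
}
```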

**For Each Frame**:
1. **Project Pattern**: `paintStripes(numStripes, "vertical")`
2. **Stabilization**: 500ms delay for projection/camera sync
3. **Capture**: `grabCameraImage()` after configured pictureDelay
4. **Store**: Add to `vertStripeImages[]` array (order critical)

### 5. Horizontal Stripe Sequence (7 Frames)
Same process as vertical stripes, but patterns run horizontally:
```
Frame 1: 2 horizontal stripes (top/bottom halves)
Frame 2: 4 horizontal stripes
...
Frame 7: 128 horizontal stripes (2^7)
```

### 6. Brightness Analysis Phase
**Location**: `lib/Scan/StripeScan.js:84-102`

**Per-Pixel Analysis Across All Frames**:
```javascript
// For every camera pixel, track brightness statistics
pixelCallback = (pixel, rasterCell) => {
  var value = pixel.r + pixel.g + pixel.b;

  if (value > rasterCell.max) rasterCell.max = value;
  if (value < rasterCell.min) rasterCell.min = value;

  rasterCell.variance = rasterCell.max - rasterCell.min;
}
```

**Dynamic Threshold Calculation**:
```javascript
// Brightness threshold is halfway between pixel's min/max
threshold = rasterCell.min + rasterCell.variance / 2;
```
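Putting the two pieces together, the per-pixel brightness test presumably looks something like this (a sketch; the actual `isBright` lives in `StripeScan` and may differ in detail):

```javascript
// A pixel counts as bright when its summed RGB value exceeds the
// midpoint between its own observed minimum and maximum.
function isBright(pixel, rasterCell) {
  const value = pixel.r + pixel.g + pixel.b;
  const threshold = rasterCell.min + rasterCell.variance / 2;
  return value > threshold;
}
```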

### 7. Gray Code Decoding
**Location**: `lib/Scan/StripeScan.js:262-316`

**Binary Accumulation Process**:
For each camera pixel, decode projector coordinates from stripe patterns:

```javascript
processFrame(img, mode) {
  var outputProp = mode === "vertical" ? "x" : "y";

  var pixelCallback = (cameraPixel, outputRasterCell) => {
    // Gray code to binary conversion
    var prevBit = outputRasterCell[outputProp] & 0x1;
    var whiteBit = 0, blackBit = 1;

    // If previous LSB = 1, flip the bit values (Gray code property)
    if (prevBit === 1) {
      whiteBit = 1;
      blackBit = 0;
    }

    // Shift left and add new bit based on brightness
    if (this.isBright(cameraPixel, outputRasterCell)) {
      outputRasterCell[outputProp] = (outputRasterCell[outputProp] << 1) | whiteBit;
    } else {
      outputRasterCell[outputProp] = (outputRasterCell[outputProp] << 1) | blackBit;
    }
  };
}
```

**Example Decoding Sequence**:
For a pixel seeing the vertical pattern [bright, dark, dark, bright, bright, dark, bright, bright] (raw Gray code 01100100, reading bright = 0 and dark = 1):
```
Frame 1 (2 stripes):   bright -> x = 0
Frame 2 (4 stripes):   dark   -> x = 01
Frame 3 (8 stripes):   dark   -> x = 010      (prev LSB = 1, so dark appends 0)
Frame 4 (16 stripes):  bright -> x = 0100
Frame 5 (32 stripes):  bright -> x = 01000
Frame 6 (64 stripes):  dark   -> x = 010001
Frame 7 (128 stripes): bright -> x = 0100011  (prev LSB = 1, so bright appends 1)
Frame 8 (256 stripes): bright -> x = 01000111

Final x-coordinate: 0b01000111 = 71 (decimal)
```
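The accumulation above is a bitwise Gray-to-binary conversion; the whole observation sequence can be folded in one pass. A standalone sketch of the same logic (not repo code):

```javascript
// Fold a sequence of bright/dark observations (MSB first) into a
// binary coordinate, using the same rule as processFrame:
// new bit = previous bit XOR gray bit, where bright encodes gray bit 0.
function decodeCoordinate(observations) {
  let coord = 0;
  for (const bright of observations) {
    const prevBit = coord & 0x1;
    const grayBit = bright ? 0 : 1;
    coord = (coord << 1) | (prevBit ^ grayBit);
  }
  return coord;
}
```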

### 8. Surface Area Detection
**Location**: `lib/Scan/CameraRaster.js:49-54`

**Variance-Based Filtering**:
```javascript
disableLowVariancePixels(threshold) {
  // Cluster per-pixel variances into low/high groups; pixels whose
  // brightness barely changed across frames lie outside the projection.
  // (Note: the threshold parameter is not used here; the cutoff is
  // derived from the data itself.)
  const variances = this.data.flat().map(x => x.variance);
  const clusters = ckmeans(variances, 2);  // 1-D k-means (ckmeans) clustering
  const cutoff = max(clusters[0]);         // boundary between the two clusters

  this.data.flat().forEach(c => c.enabled = c.variance > cutoff);
}
```

**Purpose**: Remove pixels outside the active projection area that didn't receive clear stripe patterns.

### 9. Correspondence Mapping Output
**Location**: `lib/Scan/CameraRaster.js:17-38`

**RGB Channel Encoding**:
Each camera pixel's projector coordinates are encoded in RGB:
```javascript
pixelRenderer(rasterCell) {
  return {
    r: (rasterCell.x >> 4) & 0xff,     // Upper 8 bits of x-coordinate
    g: ((rasterCell.x << 4) & 0xf0) |  // Lower 4 bits of x +
       ((rasterCell.y >> 8) & 0x0f),   // Upper 4 bits of y
    b: rasterCell.y & 0xff,            // Lower 8 bits of y-coordinate
    a: rasterCell.enabled ? 255 : 0    // Alpha: enabled/disabled flag
  };
}
```
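Reading the coordinates back out reverses that packing. A sketch of the inverse, mirroring the `pixelRenderer` layout above (assumed, not taken from the repo):

```javascript
// Unpack a 12-bit x and 12-bit y coordinate from the RGB channels.
function decodePixel({ r, g, b, a }) {
  return {
    x: (r << 4) | ((g >> 4) & 0x0f),  // upper 8 bits from r, lower 4 from g's high nibble
    y: ((g & 0x0f) << 8) | b,         // upper 4 bits from g's low nibble, lower 8 from b
    enabled: a === 255
  };
}
```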

## Bidirectional Mapping

### CameraRaster (Camera → Projector)
**Structure**: `data[cameraX][cameraY] = {x: projectorX, y: projectorY, ...}`

Used for laser detection: given a bright spot at camera coordinates, look up the corresponding projector/screen coordinates.

### ProjectorRaster (Projector → Camera)
**Location**: `lib/Scan/ProjectorRaster.js`

**Structure**: `data[projectorX][projectorY] = {x: cameraCentroidX, y: cameraCentroidY, count, ...}`

Built automatically from CameraRaster after each scan. Since multiple camera pixels can map to the same projector cell, stores the **centroid** (average) of all camera pixels mapping to each projector cell.
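A running-average centroid is one way to do that accumulation (illustrative only; the actual `fromCameraRaster` code may differ):

```javascript
// Fold one more camera pixel into a projector cell's centroid
// using an incremental mean, so no second pass is needed.
function addToCentroid(cell, camX, camY) {
  cell.count += 1;
  cell.x += (camX - cell.x) / cell.count;
  cell.y += (camY - cell.y) / cell.count;
  return cell;
}
```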

```javascript
// Build reverse mapping from a CameraRaster
const projRaster = ProjectorRaster.fromCameraRaster(cameraRaster);

// Look up camera coordinates for a screen position
const camCoords = projRaster.lookup(screenX, screenY);
// Returns: {x: cameraX, y: cameraY, count: numPixels} or null

// With bilinear interpolation for sub-pixel accuracy
const camCoords = projRaster.lookupInterpolated(screenX, screenY);
```

**Use cases**:
- Determine where in camera image a screen position will appear
- Validate correspondence accuracy
- Debug blind spots (projector cells with no camera coverage)
- `renderCoverageMap(canvas)` - Visualize camera coverage density

### 10. Post-Processing Integration
**Location**: `lib/GrayScan.js:37-52`

```javascript
scanPostProcess(canvas, callback, callbackArg) {
  this.clearCanvas();

  // Draw blind spot visualization on canvas
  this.blindCanvas.draw(canvas, this.cameraProjectorMapping,
                       dataWidth, dataHeight);

  // Configure input simulator with new correspondence mapping
  this.inputSimulator.setCameraRaster(this.cameraProjectorMapping);
  this.inputSimulator.start();

  if (typeof callback === "function") {
    callback(callbackArg);
  }
}
```

## Key Technical Principles

### Gray Code Advantages
- **Single Bit Difference**: Adjacent stripes differ by only 1 bit, robust to noise
- **Bounded Boundary Error**: A pixel straddling a stripe edge can misread only the single differing bit, so it decodes to one of the two adjacent stripes rather than an arbitrary coordinate
- **Error Tolerance**: Minor capture errors affect only local regions
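The single-bit property is easy to verify directly (a standalone check, not from the repo):

```javascript
// Count how many bits differ between the Gray codes of a and b.
const toGray = n => n ^ (n >> 1);
function grayBitDiff(a, b) {
  let diff = toGray(a) ^ toGray(b);
  let count = 0;
  while (diff) { count += diff & 1; diff >>>= 1; }
  return count;
}
// grayBitDiff(k, k + 1) is 1 for every k, so a stripe boundary can
// corrupt at most one bit of the decoded coordinate.
```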

### Structured Light Principle
- **Known Projection**: System knows exactly what pattern was projected
- **Unique Identification**: Each projector location has unique stripe sequence
- **Reverse Mapping**: Camera pixel stripe pattern → projector coordinate

### Performance Characteristics
- **Scan Duration**: ~5-10 seconds with GrayScan defaults (16 frames × ~120ms pictureDelay + processing); longer with Correspondence defaults (~600ms pictureDelay)
- **Resolution**: 256×128 projector-space grid
- **Accuracy**: Sub-pixel precision through interpolation
- **Robustness**: Variance-based surface detection filters noise

## Scan Types

### 1. Flat Scan
**Method**: `flatScan(callback, errorCallback)`
**Purpose**: Baseline correspondence mapping for flat surfaces
**Usage**: Initial calibration and laser pointer interaction

### 2. Calibration Mound Scan
**Method**: `calibrationMoundScan(sx0, sy0, sx1, sy1, callback, errorCallback)`
**Purpose**: Learn height differences using known mound in bounding box
**Usage**: One-time height calibration for 3D scanning

### 3. Mound Scan
**Method**: `moundScan(callback, errorCallback)`
**Purpose**: Generate height maps by comparing to flat baseline
**Usage**: Continuous 3D surface monitoring

## 3D Scanning Extension

### Height Calculation Process
1. **Flat Reference**: Store baseline correspondence from flat surface
2. **Height Learning**: Use calibration mound to establish elevation scale
3. **Difference Analysis**: Compare current scan to flat reference
4. **Height Mapping**: Convert correspondence differences to elevation values
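In heavily simplified form, steps 3 and 4 amount to scaling the correspondence shift at each cell. All names below are illustrative; the real mound-scan math involves the full camera-projector geometry:

```javascript
// Elevation estimate from the shift between the flat-reference and
// current correspondence at one projector cell, scaled by a factor
// learned during calibration. Purely an illustrative simplification.
function heightAt(flatCell, currentCell, scalePerPixel) {
  const dx = currentCell.x - flatCell.x;
  const dy = currentCell.y - flatCell.y;
  return scalePerPixel * Math.hypot(dx, dy);
}
```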

### Mathematical Foundation
- **Epipolar Geometry**: Height differences create predictable coordinate shifts
- **Triangulation**: Camera-projector-surface triangle determines elevation
- **Normalization**: Calibration mound provides absolute height reference

## Integration Points

### Camera Interface Requirements
- **Bright Mode**: Optimized exposure/gain for stripe detection
- **Stable Timing**: Consistent pictureDelay for capture synchronization
- **High Resolution**: Better stripe discrimination with higher camera resolution

### Input Simulation Integration
- **Coordinate Transformation**: Maps laser position through correspondence table
- **Real-time Performance**: Must not block during active scanning
- **Dynamic Updates**: Correspondence updated after each scan
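The coordinate transformation in the first bullet reduces to a table lookup against the CameraRaster structure described earlier (a sketch; the real logic lives in `InputSimulator`):

```javascript
// Map a detected laser blob at camera coordinates to screen/projector
// coordinates via the correspondence table; returns null for pixels
// outside the active projection area.
function laserToScreen(raster, camX, camY) {
  const column = raster.data[camX];
  const cell = column && column[camY];
  if (!cell || !cell.enabled) return null;
  return { x: cell.x, y: cell.y };
}
```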

### Visualization System
- **Blind Spot Mapping**: Shows areas where laser detection may fail
- **Debug Rendering**: Optional visualization of correspondence data
- **Progress Feedback**: Real-time scan progress indication

## Error Handling and Robustness

### Common Failure Modes
- **Insufficient Contrast**: Poor lighting or camera settings
- **Motion Blur**: Camera movement during long exposure
- **Projector Alignment**: Stripes not properly focused on surface
- **Timing Issues**: Capture before projection stabilizes

### Mitigation Strategies
- **Variance Filtering**: Automatically removes unreliable pixels
- **Dynamic Thresholding**: Adapts to per-pixel lighting conditions
- **Multiple Delays**: Configurable timing for different hardware
- **Progressive Resolution**: Coarse-to-fine stripe patterns reduce error propagation

This graycode scanning system forms the foundation of AnySurface's ability to transform any physical surface into a precise interactive interface.