TL;DR

For this week, I used the MediaPipe SelfieSegmentation model to create a sketch that:

  1. Recognizes human bodies through a webcam
  2. Fills their shapes with images taken by the James Webb telescope

I geeked out on vanilla JavaScript for this week's assignment too.

Will this make me more comfortable with JavaScript? Am I just giving myself a hard time? Will AI eliminate the labor market so we can kick back and enjoy the view from the very top of Maslow’s pyramid? Or will our new robo-overlords make us build actual pyramids?

Some of these questions were answered.

Astro Dancer.mov

Final Code

This isn’t the smartest way to code this project, but it’s my code and I like it.

Summary

I found a hacky solution to separate the mask from the video stream so I could display it over a background image. This taught me a lot about JavaScript, but it would have been ten times faster with p5.js, and the end product wouldn’t have been so pixelated and inaccurate. Still happy I did it, though.
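The core of that hack can be sketched as a per-pixel rule (this is my own illustration, not the actual drawMask internals): where the binary mask is transparent (the person), let the space-image pixel through; everywhere else, keep the opaque white background.

```javascript
// Minimal sketch of the compositing idea, assuming flat RGBA arrays
// (4 bytes per pixel) like the ones canvas ImageData exposes.
function compositeFrame(maskPx, spacePx) {
    const out = new Uint8ClampedArray(maskPx.length);
    for (let i = 0; i < maskPx.length; i += 4) {
        const personHere = maskPx[i + 3] === 0; // transparent foreground = person
        const src = personHere ? spacePx : maskPx;
        out[i] = src[i];
        out[i + 1] = src[i + 1];
        out[i + 2] = src[i + 2];
        out[i + 3] = 255; // result is fully opaque
    }
    return out;
}
```

The function name and the array-based framing are mine; the real pipeline does this on the GPU-backed canvas rather than pixel by pixel in JS.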

Director’s Cut

Before diving into writing the code, I watched the series on async/await programming in JavaScript so that I’d have a solid understanding of it once and for all. See notes here. I also went down the rabbit hole of canvas DOM elements, getContext(), and finding weird ways to draw on a canvas. Links to references and guides in the credits.
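A toy example of the pattern those videos cover (the names here are mine, and the promise just simulates an async call like getUserMedia or segmentPeople):

```javascript
// Stand-in for a real async browser call; resolves after a short delay.
function fetchFrame() {
    return new Promise(resolve => setTimeout(() => resolve("frame"), 10));
}

// Promise-chain style: each step goes in a .then callback.
fetchFrame().then(frame => console.log(frame));

// async/await style: reads top to bottom, and errors land in try/catch.
async function run() {
    try {
        const frame = await fetchFrame();
        return frame;
    } catch (err) {
        console.error(err);
    }
}
```

Both do the same thing; await just flattens the callback nesting, which matters once several async steps depend on each other (as they do in the segmentation code below).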

Access webcam with JS

// Grab the <video> element and pipe the webcam stream into it
const video = document.getElementById("videoStream");

const constraints = {
    video: {
        width: 1280,
        height: 720,
    },
};

navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => video.srcObject = stream)   // attach the live stream
    .catch(err => console.error(err));          // e.g. permission denied
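With async/await, the same setup can be written like this (a browser-only sketch, assuming the same videoStream element):

```javascript
// async/await version of the getUserMedia call above.
async function startWebcam() {
    const video = document.getElementById("videoStream");
    const constraints = { video: { width: 1280, height: 720 } };
    try {
        video.srcObject = await navigator.mediaDevices.getUserMedia(constraints);
    } catch (err) {
        console.error(err); // e.g. permission denied, no camera found
    }
}
```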

Setting up the model

Here is the documentation I followed.

async function applyMask() {
    const model = bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation;
    const segmenterConfig = {
        runtime: 'mediapipe', // or 'tfjs'
        solutionPath: 'https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation',
        modelType: 'general'
    };

    // Pass the config in, otherwise the runtime/solutionPath settings are ignored
    const segmenter = await bodySegmentation.createSegmenter(model, segmenterConfig);
    const segmentation = await segmenter.segmentPeople(video);

    // Transparent foreground (the person), opaque white background
    const foregroundColor = { r: 0, g: 0, b: 0, a: 0 };
    const backgroundColor = { r: 255, g: 255, b: 255, a: 255 };
    const backgroundDarkeningMask = await bodySegmentation.toBinaryMask(
        segmentation, foregroundColor, backgroundColor);

    const opacity = 1.0;
    const maskBlurAmount = 0;
    const flipHorizontal = false;
    const canvas = document.getElementById('canvas');

    await bodySegmentation.drawMask(
        canvas, video, backgroundDarkeningMask, opacity, maskBlurAmount, flipHorizontal);
}
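applyMask() handles a single frame, so animating the effect means re-running it. One way to pace that (my assumption, not something the post specifies) is requestAnimationFrame:

```javascript
// Browser-only sketch: re-run the segmentation-and-draw step once per
// display refresh so the mask follows the live video.
async function renderLoop() {
    await applyMask();                 // segment + draw the current frame
    requestAnimationFrame(renderLoop); // schedule the next frame
}
```

In practice the segmenter should probably be created once outside applyMask rather than rebuilt every frame, which is likely part of why the result felt slow and pixelated.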