Code and Design toolbox for p5js
Below are different examples to get you started with understanding how different elements work. They are intended to be bits and pieces for you to experiment with and combine.
More code examples can also be found on the examples page
GUI example/base sketch
This little sketch has been set up with best practices for configuring p5js for apps and small installations. It includes:
full screen mode (press f)
ml5 is included
custom fonts from Google
a simple GUI implementation
fixed resolution for viewing on mobile platforms
a simple state machine for making multiple views
A bit about colors
Programmers have a tendency to pick really ugly colors. Don't be that programmer. Use Google's color picker to pick colors for your experiments. The color system is based on Red, Green and Blue, so copy the RGB values from the picker.
Coding with colors
Colors are made of Red, Green and Blue values from 0 to 255. By mixing those three channels you can get more than 16 million different colors.
If you want the background to be blue then write:
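For example, full blue with no red or green:

```javascript
function setup() {
  createCanvas(400, 400);
  // red = 0, green = 0, blue = 255 gives a pure blue background
  background(0, 0, 255);
}
```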
If you want to fill an ellipse with a semitransparent red color then add another parameter for fill like this:
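For example:

```javascript
function setup() {
  createCanvas(400, 400);
  background(255);
}

function draw() {
  // the fourth parameter is alpha: 0 is fully transparent, 255 fully opaque
  fill(255, 0, 0, 127);
  ellipse(200, 200, 100, 100);
}
```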
If you want to set the stroke and not have a fill, do the following:
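For example:

```javascript
function draw() {
  stroke(255, 0, 0); // red outline
  noFill();          // the inside of the shape stays empty
  ellipse(200, 200, 100, 100);
}
```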
Basic drawing example. By drawing an ellipse where the mouse is every frame we can draw on the canvas:
The reason this works is that we are not clearing the frame for each draw with background(0);
However, this makes for a very dotted drawing app. To get a continuous line we need to draw a line between the last position of the mouse (pmouseX/Y) and the current position (mouseX/Y) like so:
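A minimal version of the drawing app could look like this:

```javascript
function setup() {
  createCanvas(400, 400);
  background(0); // only cleared once, so the drawing stays on the canvas
}

function draw() {
  stroke(255);
  // connect the previous mouse position to the current one
  line(pmouseX, pmouseY, mouseX, mouseY);
}
```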
This app is a good starting point for experimenting. Things you can try: change the color of the line, combine it with the color change below, combine it with the pose app further down, and add randomness to make more interesting things.
Change the color on mouseover
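One way to do it (a sketch, assuming a single circle at the centre of the canvas) is to measure the distance from the mouse to the circle's centre with dist():

```javascript
function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  // turn the circle red when the mouse is within its radius
  if (dist(mouseX, mouseY, 200, 200) < 50) {
    fill(255, 0, 0);
  } else {
    fill(255);
  }
  ellipse(200, 200, 100, 100);
}
```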
Use the Google font library
Another common programmer mistake is not being mindful of the fonts used. It is common to just use the default font or a random font. A good place to start is to look through Google's fonts and see if you can find one that matches your concept. This example uses two fonts loaded directly from Google's font library. To use this, do the following:
Find the name and type of the font (e.g. Sans or Serif).
Load the font in setup: loadGoogleFont( 'Droid Sans');
Set the font before writing text: textFont('Droid Sans');
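Put together it could look like this (a sketch; loadGoogleFont() is the helper from the base sketch above, not part of core p5js):

```javascript
function setup() {
  createCanvas(400, 400);
  // loadGoogleFont() comes from the base sketch, not core p5js
  loadGoogleFont('Droid Sans');
}

function draw() {
  background(0);
  textFont('Droid Sans'); // set the font before drawing text
  textSize(32);
  text('Hello', 100, 200);
}
```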
Picking the right font for your project is a science in itself; read more here.
Use a keyboard
You can use the function keyReleased() to register a keypress
Notice that the function "keyReleased()" is already present in the sketches, because it is used to toggle full screen when you press "f". So you only need to add the if statement within the outer curly brackets.
You can also make an if statement in your draw() function. Then it will do something until you release the key (e.g. moving a ball forward).
Notice that the function "draw()" is already present in the sketches. So you only need to add the if statement within the outer curly brackets to the existing draw function.
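Both patterns could be combined like this (a sketch; keyIsDown() checks a key code such as LEFT_ARROW every frame):

```javascript
let x = 200;

function draw() {
  background(0);
  // runs every frame for as long as the left arrow key is held down
  if (keyIsDown(LEFT_ARROW)) {
    x = x - 2;
  }
  ellipse(x, 200, 50, 50);
}

function keyReleased() {
  // runs once, when the key is released
  if (key === 'b') {
    background(0, 0, 255);
  }
}
```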
Play a sound sample
Playing sound samples is a quick way to make interactive experiences. To do so you need to upload your sound sample (see info to the left). Then you need to load the sample in preload():
To play the sample, call:
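With the p5.sound library the pattern looks like this (a sketch; the file name assets/sample.mp3 is just an example):

```javascript
let sample;

function preload() {
  // load the sound before setup runs; the file name is just an example
  sample = loadSound('assets/sample.mp3');
}

function mousePressed() {
  // play the sample on every mouse press
  sample.play();
}
```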
A bit about using images
Visually you have three strategies for making illustrations: you can use the code-based drawing tools (ellipse etc.), you can use a photo, or you can use a vector illustration. They are aesthetically very different, and mixing them usually results in a very messy expression. So choose wisely and be mindful of how the overall expression comes together.
Get frequencies of sound
This example records sound from the microphone and calculates its frequency spectrum. This can be used for many things: to visualize noise levels, or to react when certain frequencies rise above a certain level.
The first entry of the spectrum array holds the value of the first (lowest) frequency band, and so forth.
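A sketch of this with p5.sound's p5.AudioIn and p5.FFT (drawing the first frequency band as a bar is just an illustration):

```javascript
let mic, fft;

function setup() {
  createCanvas(400, 400);
  // p5.AudioIn and p5.FFT come with the p5.sound library
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT();
  fft.setInput(mic);
}

function draw() {
  background(0);
  // analyze() returns an array of energy values (0-255), one per frequency band
  let spectrum = fft.analyze();
  fill(255);
  // draw the level of the first (lowest) band as a bar from the bottom
  rect(10, height, 20, -spectrum[0]);
}
```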
Get audio volume
This example records sound from the microphone and calculates its amplitude. This can be used for many things, e.g. to visualize noise levels.
This returns a value between 0 and 1.
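A sketch with p5.sound's p5.AudioIn; getLevel() returns the volume:

```javascript
let mic;

function setup() {
  createCanvas(400, 400);
  // p5.AudioIn comes with the p5.sound library
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  background(0);
  // getLevel() returns the current volume as a number between 0 and 1
  let level = mic.getLevel();
  fill(255);
  // scale the circle with the volume
  ellipse(200, 200, level * 400, level * 400);
}
```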
Bouncing balls example
This is a pretty advanced example, but it illustrates how you can simulate physics. Try to play with the parameters at the top to see how it changes the behaviour:
The reason the balls have light trails is that the background is semi-transparent. Based on Keith Peters' multiple-object collision code.
This is a simple pong example. It moves a ball around on the canvas. If the ball is within the area of the square, the ball's direction is reversed on the axis that collided.
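The core of the bounce can be sketched like this (a simplified version that bounces off the canvas edges instead of a square):

```javascript
let x = 200, y = 200;
let speedX = 3, speedY = 2;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(0);
  x += speedX;
  y += speedY;
  // reverse direction on the axis that hit an edge
  if (x < 0 || x > width) speedX = -speedX;
  if (y < 0 || y > height) speedY = -speedY;
  ellipse(x, y, 30, 30);
}
```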
Use A gamepad / joystick
A gamepad is a quite versatile input device that can be used as-is, and can also be hacked into other form factors with the right tools. It gives you a lot of analogue inputs to work with, and some Arduino boards can also simulate a joystick.
When a gamepad is present you can get the different parameters with e.g.:
Other names for buttons and sticks:
FACE_1, FACE_2, FACE_3, FACE_4, LEFT_TOP_SHOULDER, RIGHT_TOP_SHOULDER, LEFT_BOTTOM_SHOULDER, RIGHT_BOTTOM_SHOULDER, SELECT_BACK, START_FORWARD, LEFT_STICK, RIGHT_STICK, DPAD_UP, DPAD_DOWN, DPAD_LEFT, DPAD_RIGHT, HOME, LEFT_STICK_X, LEFT_STICK_Y, RIGHT_STICK_X, RIGHT_STICK_Y
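The names above come from the gamepad library used in the example sketch. If you want to see what sits underneath, the browser's built-in Gamepad API can be read directly (a sketch; mapping FACE_1 to buttons[0] and LEFT_STICK_X to axes[0] follows the browser's "standard" gamepad mapping and is an assumption here):

```javascript
function draw() {
  background(0);
  // navigator.getGamepads() is the browser's built-in Gamepad API;
  // entries are null until a gamepad is connected and a button is pressed
  let pads = navigator.getGamepads();
  if (pads[0]) {
    // on the "standard" mapping FACE_1 is usually buttons[0]
    if (pads[0].buttons[0].pressed) {
      fill(255, 0, 0);
    } else {
      fill(255);
    }
    // axes[0] is typically LEFT_STICK_X, running from -1 to 1
    let x = map(pads[0].axes[0], -1, 1, 0, width);
    ellipse(x, 200, 50, 50);
  }
}
```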
Pose tracking for bodily interaction
Uses the camera to build a skeleton of a person and track different parts of that person. This is similar to Kinect tracking, but only uses the webcam. To get one of the points it is tracking, use:
The first 0 is the person id and the second is the point on that person's skeleton.
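A sketch of the full flow with ml5's poseNet (assuming the results array is called poses, as in the example):

```javascript
let video, poseNet;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  // every time new poses are detected, store them
  poseNet.on('pose', function (results) {
    poses = results;
  });
}

function draw() {
  image(video, 0, 0);
  if (poses.length > 0) {
    // person 0, keypoint 0 (the nose in poseNet's keypoint list)
    let point = poses[0].pose.keypoints[0].position;
    fill(255, 0, 0);
    ellipse(point.x, point.y, 20, 20);
  }
}
```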
Face and mood detection
Artificial intelligence can be used to recognize faces and assess their mood. This can e.g. be used to make an Instagram filter.
The diagram to the right gives you the different points. For example, you can find the nose by using point number 41. You can get the individual positions and draw an ellipse with this code:
You can get how angry a person is with this code:
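A sketch under the assumption that the model's callback fills a detections array shaped like ml5's faceApi results (landmarks.positions for the points, expressions for the moods; exact property names depend on the library version):

```javascript
let detections = [];

// callback that the face model calls with new results (as in the example sketch)
function gotFaces(error, result) {
  if (!error) {
    detections = result;
  }
}

function draw() {
  background(0);
  if (detections.length > 0) {
    // point 41 in the landmark diagram (used here for the nose)
    let nose = detections[0].landmarks.positions[41];
    fill(255, 0, 0);
    ellipse(nose.x, nose.y, 10, 10);

    // each expression is a value between 0 and 1
    let angry = detections[0].expressions.angry;
    textSize(10 + angry * 90); // the angrier, the bigger the text
    text('angry', 20, 50);
  }
}
```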
Object detection library
Object detection libraries have been trained to recognize objects through a huge library of images of different objects. The cocossd method has the following syntax:
The for loop runs through the objects detected in the array detections. For each object, it draws a rectangle around the object and adds a text label.
Be aware that the library takes a while to load.
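The loop could be sketched like this (assuming detections is the array from ml5's detect callback; cocossd results carry x, y, width, height and label):

```javascript
let video;
let detections = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  for (let i = 0; i < detections.length; i++) {
    let object = detections[i];
    // rectangle around the detected object
    noFill();
    stroke(0, 255, 0);
    rect(object.x, object.y, object.width, object.height);
    // text label in the corner of the rectangle
    noStroke();
    fill(0, 255, 0);
    text(object.label, object.x + 5, object.y + 15);
  }
}
```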
Speech to text
Speech to text will listen to the audio input and convert it into text. It is not precise, but surprisingly good. You can then try to detect words and use them to do things. Right now it is responding to "kage". The small text is interim results and the large text is the final result.
Text to speech
This example plays a string of text. This way you can make interactions that are based on voice-based language instead of visuals etc. To have the voice say "Husk at spise fisk" you would need to write:
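With the p5.speech library it is only a couple of lines (a sketch; p5.Speech comes from that library, not core p5js):

```javascript
let myVoice;

function setup() {
  // p5.Speech is provided by the p5.speech library
  myVoice = new p5.Speech();
}

function mousePressed() {
  // speak the sentence out loud on mouse press
  myVoice.speak('Husk at spise fisk');
}
```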
Text to speech & speech to text
This is a combined sketch which allows you to listen to voices and return a string based on keyword matches.
Teach the machine to detect different poses
In this example you record different kinds of poses, and through machine learning it will recognize which pose you are in.
Start the program.
Press add pose once for each pose you want.
Go into the physical pose in front of the camera and press the pose that should match.
Do this with all the poses
Then you can get the best match through findBestMatch().id
If you want to keep the learned poses, press download and you will get a file that you can reupload. Set the variable classifierName at the top of the program to the name of the file.
Teach the machine to detect different elements (teachablemachines)
This example uses the teachable machine tutorial from Google.
Go to the webpage and press get started.
Choose image classification.
Name the two classes, e.g. Cup and noCup.
Record a bunch of images with either a cup or no cup.
Ask it to train your classifier.
When done press export model.
Press the upload model button.
Copy-paste the shareable link into the example sketch as the "imageModelURL" at the top of the sketch.
Run your code.
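The steps above can be sketched as follows with ml5's imageClassifier (the model URL is a placeholder you replace with your own shareable link):

```javascript
// paste your own shareable link from Teachable Machine here (placeholder)
let imageModelURL = 'https://teachablemachine.withgoogle.com/models/XXXX/';
let classifier, video;
let label = 'waiting...';

function preload() {
  classifier = ml5.imageClassifier(imageModelURL + 'model.json');
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (!error) {
    // results are sorted by confidence; take the best match
    label = results[0].label;
  }
  classifyVideo(); // classify the next frame
}

function draw() {
  image(video, 0, 0);
  fill(255);
  text(label, 20, 40);
}
```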
Their pose classification system is not really working for p5js at the moment, so use the example above for that.
Style transfer model
Draw on an image instead of the canvas
When things get more advanced you often want to draw on an image instead of directly on the canvas. This allows you to send the image to machine learning, save the image, and refresh elements on the screen without affecting the drawing you are making.
This example grabs an image from the webcam feed and uses machine learning to detect a hand. The points can then be used to make interactive installations where one is not touching the interface, and for gesture-based experiments.
Text to points example
This little example converts text to points - so you can animate and make effects with text.
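p5's own font objects can do this with textToPoints() (a sketch; the file name font.ttf is just an example for a font you have uploaded):

```javascript
let font;

function preload() {
  // the file name is just an example; upload your own font file
  font = loadFont('font.ttf');
}

function setup() {
  createCanvas(400, 400);
  background(0);
  // sampleFactor controls how many points you get along the outline
  let points = font.textToPoints('hello', 50, 200, 100, { sampleFactor: 0.2 });
  fill(255);
  noStroke();
  for (let p of points) {
    ellipse(p.x, p.y, 4, 4);
  }
}
```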
Capture an image from the webcamera
This example shows you how to capture an image from the webcamera. This is a good starting point for a photo booth, stop motion, machine learning, etc.
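A minimal sketch: show the live feed, and copy a frame into an image with get() when the mouse is pressed:

```javascript
let video, snapshot;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
}

function draw() {
  if (snapshot) {
    image(snapshot, 0, 0); // show the captured image
  } else {
    image(video, 0, 0);    // show the live feed
  }
}

function mousePressed() {
  // get() copies the current video frame into a p5 image
  snapshot = video.get();
}
```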
Make a noisy circle
Pick a color from a webcamera
This little example picks the color from a webcamera
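A sketch of the idea: get() with coordinates returns the [r, g, b, a] values of the pixel under the mouse:

```javascript
let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  // get(x, y) returns the [r, g, b, a] values of that pixel
  let c = video.get(mouseX, mouseY);
  fill(c[0], c[1], c[2]);
  ellipse(mouseX, mouseY, 50, 50);
}
```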
Free music, video, samples, images and clipart to use
BACKED UP CODES OF THE DIFFERENT EXAMPLES ABOVE
#### INPROGRESS ####
Detect gestures and objects
- [ ] 29/06/2021 mediapipe - handtracking
- [ ] 29/06/2021 https://mediapipe.dev/