gvd8
Giacomo is a junior from Northport, NY, studying Electrical and Computer Engineering. On campus, he is the treasurer of the Cornell Mock Trial Association and is also a TA for Digital Logic (ECE 2300). He really enjoys cooking, traveling and playing soccer.
mf568
Michelle is a junior from Plano, TX, studying Electrical and Computer Engineering. She likes to play piano and volunteers at the local SPCA in her free time.
kan57
Kristina is a junior from Fort Lauderdale, FL, studying Electrical and Computer Engineering. On campus she is involved in the Cornell Autonomous Underwater Vehicle Project Team.
rms438
Russell is a junior from Manhasset, NY, studying Electrical and Computer Engineering. He likes to produce music in his free time.
ys449
Jo Song is a junior from East Lansing, MI, studying Electrical and Computer Engineering. Her favorite grocery store is Meijer. In her free time, she volunteers at Cayuga Medical Center.
jfc420
This is our imaginary teammate. He was immensely useful in helping stave off insanity as we spent increasingly long hours in the lab.
Sometimes we like each other
Microcontrollers
Testing example code: This code is copied directly from File >> Examples >> 1.Basics >> Blink. The Arduino board includes a built-in LED light that is wired to pin 13. In the code, setup() initializes pin 13 as an output, and in loop(), the pin is repeatedly turned on (HIGH) for a second and off (LOW) for a second, thereby creating the blink effect.
We were instructed to modify the Blink code to work with an external LED. We connected the LED to pin 11 with a 1 kΩ series resistor to prevent burning out the LED, and modified the code accordingly.
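A minimal sketch of the modified Blink (assuming the series resistor runs from pin 11 to the LED's anode, with the cathode returned to GND):

const int LED_PIN = 11;   // external LED in place of the built-in pin 13 LED

void setup() {
  pinMode(LED_PIN, OUTPUT);      // drive the external LED as an output
}

void loop() {
  digitalWrite(LED_PIN, HIGH);   // LED on for one second
  delay(1000);
  digitalWrite(LED_PIN, LOW);    // LED off for one second
  delay(1000);
}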
This section provides an introduction to the six available analog pins on the Arduino Uno. First, the variable resistance of a potentiometer had to be displayed digitally on the serial monitor provided by the Arduino IDE. Second, the integer values read from the potentiometer had to be mapped to the LED using the analogWrite() function. Third, the oscilloscope was used to analyze the PWM signal produced by the Arduino. Figure (2) shows how the potentiometer was powered as well as where a pull-down resistor was placed relative to the rest of the circuitry. Figure (3) displays how the pull-down resistor was used with the LED, and the connection through digital pin 11. This connection was used as a data output from the Arduino Uno to the LED.
Figure (1): The overall connection interface for this section of the lab, including the potentiometer, Arduino Uno, pull-down resistors, and wires.
Figure (2): Schematic diagram for the serial monitor hookup.
Figure (3): The above diagram goes with the code in part A of the following section: Analog Output
A: The following code was used to map the potentiometer readings onto the LED for variable brightness settings. This code was adapted from the previous task of displaying the potentiometer’s values on the Serial Monitor. The analogRead function read data from pin A0, connected to the potentiometer, and stored it as an integer in the variable brightness. The value stored in brightness was output to the LED through pin 11 (see Figure 2) using the analogWrite function.
In addition, the integer values were printed to the serial monitor using the Serial.println function. Note also that the serial link was initialized with “Serial.begin(9600).”
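A minimal sketch of this mapping (assuming the wiper on A0 and the LED on PWM pin 11; our actual code may have scaled the reading differently, but map() is one common way to fit the 0–1023 ADC range into the 0–255 PWM range):

const int POT_PIN = A0;
const int LED_PIN = 11;

void setup() {
  Serial.begin(9600);                              // initialize the Serial Monitor
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  int reading = analogRead(POT_PIN);               // 0-1023 from the potentiometer
  int brightness = map(reading, 0, 1023, 0, 255);  // scale to the 8-bit PWM range
  analogWrite(LED_PIN, brightness);                // PWM duty cycle sets LED brightness
  Serial.println(reading);                         // echo the raw value to the monitor
}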
B: We analyzed the PWM signal output by the Arduino on an oscilloscope. Using the oscilloscope's features, such as the trigger level and time scale, the frequency of the signal was found to be 50 Hz. A video of the PWM pulse width (determined by the on-off times) changing as the potentiometer knob is turned:
Another aspect of this lab was to connect and control Parallax servos using the Arduinos. There were two stages to this process: an initial step of controlling the servo by writing specific values to it, and a second stage of driving the servo from the potentiometer.
A: The circuitry setup between the Arduino and the Parallax servo. The servo is powered directly from the Arduino because, in this case, the supply noise is not significant enough to affect it. The servo is connected directly to the Arduino with the black wire to GND, the red wire to 5V, and the white wire to A3, the pin used to send the servo's PWM control signal. You can view the video here.
We also tested the servo at 90 and at a large set of values between 0 and 180. The signal on A3 was measured with the oscilloscope and the result is depicted in Figure 4. To do this, we used the Servo library and the code titled pwm_servo.ino. The frequency of this signal was 50 Hz, with a minimum duty cycle of 7.5% and a maximum duty cycle of 12%.
Figure (4): The PWM signal measured by the oscilloscope when the Parallax servo is controlled by hard-coded values.
B: The second setup mimicked the first, except that the PWM control took its data from the potentiometer rather than from values written in the code. The setup was modified so that the potentiometer output its values to A0, which were then written out on pin ~3; pin ~3 controlled the servo speed using PWM. The wiring is shown in Figures (5) and (6) below. The code that controls this is pwm_servo.ino. The end result, with speed varying with the potentiometer, is depicted in the following video.
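A minimal sketch of the potentiometer-driven control (a sketch under the assumptions above: wiper on A0, servo signal on pin 3; for a continuous-rotation servo, 90 is stop and 0/180 are full speed in opposite directions):

#include <Servo.h>

Servo parallax;
const int POT_PIN = A0;

void setup() {
  parallax.attach(3);                          // servo signal line on pin 3
}

void loop() {
  int reading = analogRead(POT_PIN);           // 0-1023
  int speed = map(reading, 0, 1023, 0, 180);   // map onto the servo command range
  parallax.write(speed);                       // 90 = stop; the ends = full speed
}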
Running Servo from Potentiometer
Figure (5): Connection between the Arduino and the servo.
Figure (6): Image of the wiring shown in Figure 5.
In order to power the Arduino without a long USB cable attached, we soldered two wires to a USB connector. We can use this to power the Arduino from a power bank held at the bottom of the chassis. See Figure (7) below:
Figure (7): The soldered wires for the USB port.
Below is an image of our robot after Lab 1, with the chassis, servos, and Arduino mounted and the power bank on the bottom of the chassis:
It is also important to note that the servos have to be driven in opposite directions (0 and 180) for the robot to move in one direction, because of how they are mounted, as seen here:
And finally: a running robot (that goes in a straight line)! Yay!
The purpose of this lab was to successfully implement two sensors: one to detect a 660 Hz whistle blow, and the other to capture input from an IR sensor blinking at 7 kHz; both are important components for completing the second milestone of the robot. When successfully integrated, the robot will be able to detect the whistle blow that signals the beginning of its maze mapping, and use the IR sensor inputs to detect treasures.
We started by adding the Open Music Lab FFT library to our Arduino IDE by putting the directory into the libraries folder of the IDE.
Before doing the FFT analysis on the Arduino board, we analyzed the signal with an oscilloscope. A video of the microphone’s output connected directly to the oscilloscope is displayed here.
After analyzing the signal with the oscilloscope, we concluded the signal received was strong enough for FFT analysis without external amplification. Using the built-in fft_adc_serial example, we were able to see the FFT outputs from the default number of bins (FFT_N / 2 = 128). After terminating the program, we copied a single iteration of values into Excel for 660 Hz, 1320 Hz, and a control frequency (no sound/room noise). Figure #1 displays what was graphed in Excel from these data points. We found that the 660 Hz peak fell in bins 4/5. As we increase the frequency of the tested sound waves, the bin number also increases: the other test frequency, 1320 Hz, peaked at about bin 9/10, double the bin number of 660 Hz. This shows that our FFT analysis is working correctly. When working with the microphone, we used a web application recommended by the course staff: here
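These bin numbers line up with the expected bin width. Assuming fft_adc_serial's default free-running ADC settings on the Uno (16 MHz clock, prescaler of 32, 13 ADC clocks per conversion), the sample rate and bin width are roughly

$$f_s \approx \frac{16\,\text{MHz}}{32 \times 13} \approx 38.5\ \text{kHz}, \qquad \Delta f = \frac{f_s}{256} \approx 150\ \text{Hz},$$

so 660 Hz lands near bin $660/150 \approx 4.4$ (bins 4/5) and 1320 Hz near bin 8.8 (bins 9/10), matching what we observed.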
Figure 1: Signal Magnitude vs. Bin Number
The acoustics team used a microphone connected to an Arduino board along with FFT analysis in order to detect a 660 Hz signal.
An electret microphone with an attached amplifier was used in this section, with its output connected to a 3 kΩ pull-up resistor followed by a 1 µF polarized capacitor. The capacitor acts as a high-pass filter, blocking DC and passing AC, which prevents low frequencies from propagating through the circuit. This works because a capacitor's impedance varies with frequency, so low-frequency signals see a large impedance. The resistor in parallel creates a lower-resistance path that the low-frequency signals take instead. The microphone itself is a passive sensor that uses the energy of the vibrating membrane to generate its signal. The amplifier included in the microphone circuitry was a MAX4466 chip.
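As a rough sanity check, if we model the coupling capacitor working against the 3 kΩ resistance as a first-order RC high-pass (an assumption about the effective load), the corner frequency would be

$$f_c = \frac{1}{2\pi RC} = \frac{1}{2\pi\,(3\,\text{k}\Omega)(1\,\mu\text{F})} \approx 53\ \text{Hz},$$

well below the 660 Hz tone we want to pass.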
Since the circuitry and amplifier were already integrated into the electret capsule microphone board, we only had to connect the microphone’s three pins to the Arduino. The VCC, GND, and OUT pins on the microphone were connected to the +5 V, GND, and A0 pins respectively on the Arduino. After connecting the microphone, fast Fourier transforms were used, specifically in the modified fft_adc_serial program, to distinguish a 660 Hz signal from room noise as well as from 585 Hz and 735 Hz signals.
Based on our previous FFT analysis (see Figure #1), we concluded that bins 4 and 5 held the maximum values for a 660 Hz signal. Therefore, we monitored how often bins 4 and 5 occurred as the maximum in the program. By keeping track of the indices of the FFT maximums, we blinked an LED every time a balance of bin 4 and bin 5 maximums was received from the FFT analysis.
A demo was performed in which an LED shone only when 660 Hz was detected. A video showing the effect on the LED of 585 Hz vs. 660 Hz vs. 735 Hz is shown here.
Here is our modified fft_adc_serial (from the examples) code for 660Hz Detection:
for (byte i = 0; i < FFT_N/2; i++) {
  // If the value of this bin is greater than the current maximum,
  // store the value in maximum and the bin number in index.
  if (fft_log_out[i] > maximum) {
    maximum = fft_log_out[i];
    index = i;
  }
  if (i == 127) { // Checks what the maximum bin number was at the last bin (FFT_N/2 - 1)
    if (index == 4) { // Increment start1
      start1++;
    }
    if (index == 5) { // Increment start2
      start2++;
    }
    if (start1 == 20) { // Too many bin 4's indicate a 585 Hz signal. Reset start2.
      start2 = 0;
    }
    if (start2 == 20) { // Too many bin 5's indicate a 735 Hz signal. Reset start1.
      start1 = 0;
    }
    if (start1 > 3 && start2 > 2) { // A balance of bin 4's and 5's indicates a 660 Hz signal. Shine the LED.
      digitalWrite(10, HIGH);
      delay(1000);
      digitalWrite(10, LOW);
    }
    if (index != 4 && index != 5) { // Resets both incrementers
      start_time = 0;
      start1 = 0;
      start2 = 0;
    }
    maximum = 0; // resets maximum checking at the end of the loop
    index = 0;   // resets the index at which a maximum occurs at the end of the loop
  }
}
Our IR system for light frequency detection consisted of an Arduino with a specialized program (see code below), an LM358 op-amp for amplification, and our phototransistor circuit.
Our op-amp was wired according to Figure #2. By selecting R1 = 20 kΩ and R2 = 10 kΩ, we achieved a voltage gain of 3×.
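This follows from the standard gain formula (assuming the non-inverting configuration shown in the referenced figure):

$$A_v = 1 + \frac{R_1}{R_2} = 1 + \frac{20\,\text{k}\Omega}{10\,\text{k}\Omega} = 3.$$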
Figure 2: Op-amp amplifier configuration (image courtesy of http://ecetutorials.com/analog-electronics/inverting-and-non-inverting-amplifiers/). Op-amp pinout from the LM358 documentation: http://www.ti.com/lit/ds/symlink/lm258a.pdf
In order to test our IR system’s ability to detect the three different treasure frequencies, we connected three LEDs to our Arduino. One LED shone when 7 kHz was detected, another when 12 kHz was detected, and a third when 17 kHz was detected. Only one LED shone at a time, and the detection range was about half a foot.
A demo with the LED configuration described above is shown in the following video: here.
The light frequency output by the treasure was adjusted by hooking up the positive and negative headers below its potentiometer to an oscilloscope. The oscilloscope monitored the frequency and amplitude of the signal as we turned the potentiometer with a screwdriver.
Below is our modified fft_adc_serial code for Treasure Signal Detection:
for (byte i = 0; i < FFT_N/2; i++) {
  if (fft_log_out[i] > maximum - 5) {
    if (i > 5) {
      // Bin numbers less than five tend to be maximums for treasure
      // signals, so we cut them out for easier signal detection.
      maximum = fft_log_out[i];
      index = i;
    }
  }
  if (i == 127) { // Checks what the maximum bin number was at the last bin (FFT_N/2 - 1)
    if (index == 45 || index == 46 || index == 47) { // 7 kHz
      start1++;
      // Shine the LED on digital pin 8 if the bin numbers for 7 kHz are
      // detected for at least 5 iterations.
      if (start1 > 5) {
        digitalWrite(8, HIGH);
        delay(1000);
        digitalWrite(8, LOW);
      }
    }
    else { // Maximum didn't occur at the 7 kHz bins; reset start1 to 0
      start1 = 0;
    }
    if (index == 79 || index == 80 || index == 81) { // 12 kHz
      start2++;
      // Shine the LED on digital pin 9 if the bin numbers for 12 kHz are
      // detected for at least 5 iterations.
      if (start2 > 5) {
        digitalWrite(9, HIGH);
        delay(1000);
        digitalWrite(9, LOW);
      }
    }
    else { // Maximum didn't occur at the 12 kHz bins; reset start2 to 0
      start2 = 0;
    }
    if (index == 113 || index == 114 || index == 115) { // 17 kHz
      start3++;
      // Shine the LED on digital pin 10 if the bin numbers for 17 kHz are
      // detected for at least 5 iterations.
      if (start3 > 5) {
        digitalWrite(10, HIGH);
        delay(1000);
        digitalWrite(10, LOW);
      }
    }
    else { // Maximum didn't occur at the 17 kHz bins; reset start3 to 0
      start3 = 0;
    }
    maximum = 0; // resets maximum checking at the end of the loop
    index = 0;   // resets the index at which a maximum occurs at the end of the loop
  }
}
Graphics: Take external inputs to the FPGA and display them on a screen. This is the beginning of our “maze.”
Acoustics: Take an external input to the FPGA and generate a short ‘tune’ consisting of at least three tones to a speaker via an 8-bit DAC.
(Giacomo, Kristina)
The initial part of the lab that we implemented was using the FPGA to generate a square wave. We selected a frequency of 440 Hz for the square wave and connected this output to GPIO pin 0 because it was not previously in use. The following code implements the square wave, along with the addition of the counter and CLKDIVIDER_440 to the parameter declarations at the top of the module. For the wiring, we used a breadboard and connected the GPIO pin to the two data pins on the phone jack socket. Additionally, we soldered the two side pins together for ease of use. The sound generated and the setup are shown here. The square wave generated is shown in the picture below.
always @ (posedge CLOCK_25) begin
if(counter == 0) begin
counter <= CLKDIVIDER_440 - 1;
square_440 <= ~square_440;
end
else begin
counter <= counter - 1;
square_440 <= square_440;
end
end
The next phase we implemented was a single sine wave, to generate a clearer-sounding tone. For this we needed an 8-bit R-2R DAC, because the output from the FPGA to the speaker is no longer one of two values as it was with the square wave. We wired GPIO outputs to pins 1-8 of the DAC and then connected pin 16 of the DAC to the speaker input. This wiring setup is depicted in the following picture.
Then we wrote the following code to implement the sine wave.
reg [7:0] sine[0:255];
reg [10:0] counter1;
initial
begin
sine[0] <= 8'd100;
sine[1] <= 8'd102;
//remaining sin table values
sine[255] <= 8'd98;
end
assign GPIO_1_D[8] = q[7];
assign GPIO_1_D[10] = q[6];
assign GPIO_1_D[12] = q[5];
assign GPIO_1_D[14] = q[4];
assign GPIO_1_D[16] = q[3];
assign GPIO_1_D[18] = q[2];
assign GPIO_1_D[20] = q[1];
assign GPIO_1_D[22] = q[0];
always @ (posedge CLOCK_25) begin
if (counter1 == 127 ) begin
counter1 <= 0;
q <= sine[ADDR];
if (ADDR == 255)
ADDR <= 0;
else
ADDR <= ADDR + 1;
end
else
counter1 <= counter1 + 1;
end
The code is set up so that a counter controls stepping through a sine table, outputting each entry in turn to trace out one period of a sine wave. The counter increments continuously at the 25 MHz clock frequency and restarts once it reaches 127; each time it reaches 127 (counting 0, that is 128 clock ticks per table step), ADDR is incremented, which produces an audible sine wave through the speakers. The counter's incrementation is implemented with an if statement, and ADDR's with a nested if statement. To generate the sine table we used direct digital synthesis. We created the table outside of Verilog for convenience, using the following MATLAB code, and pasted its output into our project. We chose one period of 256 plotted values because that number corresponds naturally to the 8-bit DAC, and we graphed the values to ensure the table was correct. Eight separate GPIO pins, chosen because they were not previously in use, were set as outputs, one per DAC input. See the video of the sine wave producing a sound here.
total = 255;
for t = 0:total
    value = round(100*sin((6.283*t)/total) + 100);
    values(t+1) = value;   % MATLAB arrays are 1-indexed
    fprintf('sine[%d] <= 8''d%d;\n', t, value)
end
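Combining the counter limit and the table length above, the tone frequency works out to

$$f = \frac{25\ \text{MHz}}{128 \times 256} \approx 763\ \text{Hz}.$$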
To implement the tri-tone, we used three sine tones at different frequencies. The setup is similar to the single sine wave code, repeated three times with different counter limits. To cycle through the tones we implemented a finite state machine: in each state, we wait until one second has passed (counted in clock cycles) before moving to the next state. The wiring from the FPGA through the DAC to the speaker is the same as for the single sine wave. Our finite state machine is shown below:
always @ (posedge CLOCK_25) begin
next_state = 2'b00;
case(state)
TONE1: if (tone_length == ONE_SEC) begin
next_state = TONE2;
tone_length = 0;
end
else begin
next_state = TONE1;
tone_length = tone_length + 1;
end
TONE2: if (tone_length == ONE_SEC) begin
next_state = TONE3;
tone_length = 0;
end
else begin
next_state = TONE2;
tone_length = tone_length + 1;
end
TONE3: if (tone_length == ONE_SEC) begin
next_state = TONE1;
tone_length = 0;
end
else begin
next_state = TONE3;
tone_length = tone_length + 1;
end
default: next_state = TONE1;
endcase
end
always @ (posedge CLOCK_25) begin
state <= next_state;
end
always @ (posedge CLOCK_25) begin
///// TONE 1 //////
if (state == TONE1) begin
if (counter1 == 127 ) begin
counter1 <= 0;
q <= sine[ADDR];
if (ADDR == 255)
ADDR <= 0;
else
ADDR <= ADDR + 1;
end
else
counter1 <= counter1 + 1;
end
///// TONE 2 ////
if (state == TONE2) begin
if (counter1 == 255 ) begin
counter1 <= 0;
q <= sine[ADDR];
if (ADDR == 255)
ADDR <= 0;
else
ADDR <= ADDR + 1;
end
else
counter1 <= counter1 + 1;
end
///// TONE 3 /////
if (state == TONE3) begin
if (counter1 == 511 ) begin
counter1 <= 0;
q <= sine[ADDR];
if (ADDR == 255)
ADDR <= 0;
else
ADDR <= ADDR + 1;
end
else
counter1 <= counter1 + 1;
end
end
The video of the implemented tri-tone waves can be seen here. The video of the tri-tone sound can be seen here.
(Russell, Michelle, Jo)
Our team decided to sequentially divide our work into four portions. The first task was to display the logic levels of two input switches on the FPGA board to four LEDs on the FPGA board. Second, the logic levels of two input switches on the FPGA board would be displayed to the computer screen. Third, the code would be modified to save memory space and be able to display a “map” later on in the semester. Last, outputs from the Arduino Uno would be displayed onto the computer screen.
A: Switches to LED lights on the FPGA board
We implemented a finite state machine that checks whether the current “gridarray” coordinate matches the inputs, and saves a 1 to that register accordingly. Next, the machine goes to state_0 and increments the “gridarray” coordinates. Here is our code working.
Here is the code for this part:
if (state==1'b1) begin //switch input
//led_counter <= 25'b0;
if( grid_coord_y == highlighted_y && grid_coord_x == highlighted_x) begin
gridarray[grid_coord_x][grid_coord_y] <= 1'b1;
end
else begin
gridarray[grid_coord_x][grid_coord_y] <= 1'b0;
end
state <= 1'b0;
end
if(state==1'b0) begin //increment grid index
if (grid_coord_x == 1'b0 && grid_coord_y == 1'b0) begin
grid_coord_x <= 2'b00;
grid_coord_y <= 2'b01;
end
else if (grid_coord_x == 1'b0 && grid_coord_y == 1'b1) begin
grid_coord_x <= 1'b1;
grid_coord_y <= 1'b0;
end
else if (grid_coord_x == 1'b1 && grid_coord_y == 1'b0) begin
grid_coord_x <= 1'b1;
grid_coord_y <= 1'b1;
end
else begin
grid_coord_x <= 1'b0;
grid_coord_y <= 1'b0;
end
// led_state <= led_state;
//led_counter <= led_counter + 25'b1;
state <= 1'b1;
end
B: Switches on FPGA board to computer screen grid
In the second part, instead of outputting to LED lights, we outputted to four pins on the FPGA board (GPIO_0_D). These GPIO pins controlled the colored square on the screen. Here is a video demonstrating the changing position of the square based on the switch logic levels:
https://www.youtube.com/watch?v=1_f9FdkPpto
In this part of the lab we used an 8-bit DAC, which converted the signals from the FPGA board to analog signals between 0 and 1 V. The digital pixel information transmitted from the FPGA consisted of 3 bits specifying red, 3 bits specifying green, and 2 bits specifying blue. When all 8 bits are 1’s, the DAC reads these digital signals as three 1 V signals (and the VGA screen displays white). For the 3-bit colors, the first bit is the most “significant” because it represents a higher order of magnitude (2^2), whereas the last bit represents the lowest (2^0). To account for this, we chose resistor values that add more “weight” to the first bit. Since the three bit weights 4, 2, and 1 sum to 7, when the most significant bit alone is 1 we want the output to be 4/7 V. The internal resistance of the VGA display is 50 Ω, and the FPGA outputs 3.3 V.
Calculating the resistance R for the most significant of the 3 bits:
4/7 V = 3.3 V × 50/(50 + R) ⇒ R = 238.75 Ω
Similarly, the resistance needed for the second most significant bit is 527.5 Ω and the resistance needed for the least significant bit is 1105 Ω.
For the 2-bit color ( blue), the resistance for the most significant bit is 197.5 Ω and the resistance for the least significant bit is 445 Ω.
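All five resistor values follow from the same divider relation. For a bit of weight $w$ out of a total weight $W$ (7 for the 3-bit colors, 3 for blue):

$$\frac{w}{W}\,\text{V} = 3.3\,\text{V}\cdot\frac{50}{50+R} \;\Rightarrow\; R = 50\left(\frac{3.3\,W}{w} - 1\right)\ \Omega,$$

which reproduces 238.75 Ω, 527.5 Ω, and 1105 Ω for the 3-bit weights, and 197.5 Ω and 445 Ω for the 2-bit weights.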
It should be noted that resistors were already chosen and soldered into the DAC we used to complete this lab.
C: Saving Memory
In order to make our program more efficient and adaptable for future uses (e.g., more grid spaces with images), we implemented a double “for” loop that iterates over a two-dimensional memory array. The array was declared as a register named “gridscreen” storing eight-bit values, as shown in our merged code. Each element of “gridscreen” holds 8 bits so it can store the 8-bit representation of a pixel color.
always @ (posedge CLOCK_50) begin
if(gridarray[0][0] == 1'b1) begin
gridscreen[0][0] = 8'b000_111_00; //green
gridscreen[0][1] = 8'b000_000_11; //blue
gridscreen[1][0] = 8'b111_000_00; //red
gridscreen[1][1] = 8'b111_000_11; //purple
end
else if (gridarray[0][1] == 1'b1) begin
gridscreen[0][0] = 8'b111_000_11; //purple
gridscreen[0][1] = 8'b000_111_00; //green
gridscreen[1][0] = 8'b000_000_11; //blue
gridscreen[1][1] = 8'b111_000_00; //red
end
else if (gridarray[1][0] == 1'b1) begin
gridscreen[0][0] = 8'b111_000_00; //red
gridscreen[0][1] = 8'b111_000_11; //purple
gridscreen[1][0] = 8'b000_111_00; //green
gridscreen[1][1] = 8'b000_000_11; //blue
end
else begin
gridscreen[0][0] = 8'b000_000_11; //blue
gridscreen[0][1] = 8'b111_000_00; //red
gridscreen[1][0] = 8'b111_000_11; //purple
gridscreen[1][1] = 8'b000_111_00; //green
end
PIXEL_WIDTH = 10'd16;
PIXEL_HEIGHT = 10'd16;
if ((PIXEL_COORD_X < 2 * PIXEL_WIDTH) && (PIXEL_COORD_Y < 2 * PIXEL_HEIGHT)) begin
for (i = 10'd0; i <= 10'd1; i = i + 10'd1) begin
for (j = 10'd0; j <= 10'd1; j = j + 10'd1) begin
if(((j * PIXEL_WIDTH < PIXEL_COORD_X) && (PIXEL_COORD_X < (j + 10'd1) * PIXEL_WIDTH)) && ((i * PIXEL_HEIGHT < PIXEL_COORD_Y) && (PIXEL_COORD_Y < (i + 10'd1) * PIXEL_HEIGHT))) begin
PIXEL_COLOR = gridscreen[j][i];
end
end
end
end
else begin
PIXEL_COLOR = 8'b000_000_00;
end
end
The four beginning control blocks decide which colors will be stored in each individual grid space. Since the current objective was a 2x2 grid, four different colors were stored in a top left, top right, bottom left, and bottom right grid space. The widths and heights of these grid spaces were defined right before the double for loop.
In order to understand the functionality of the double for loop method that was implemented, several points must be made:
Each index “i” or “j” corresponds to a grid space in the same fashion as the indexes in “gridscreen.” In this way, the register PIXEL_COLOR can be assigned the color information stored inside “gridscreen” at those indexes.
The current index “i” or “j” is multiplied by the height and width of each grid space, respectively, in the if statement. This defines the bounds of the space being colored in the current iteration of the for loop.
Since the “j” for loop is inside the “i” for loop, the grid spaces are colored across the screen until the last grid space is reached in the x direction. Then the process repeats, one “PIXEL_HEIGHT” lower than the previous row.
The qualifying if-else statement one scope above the double for loop covers all the pixels on the screen that aren’t being used, and colors them a default color (“PIXEL_COLOR = 8’b000_000_00” // black).
A more common approach to this problem uses case statements to define the widths and heights of each grid space, and then assigns each element of the memory array to a grid space. We picked the double for loop over this implementation for better code efficiency and adaptability: only a few lines of code were needed, where the case-statement method would have required dozens.
Moreover, if a 10x10 grid were drawn, only the control blocks at the beginning of the code would need to be expanded. No lines would need to be added to the bodies of the for loops; in most cases we would only change the loop bounds on “i” and “j” and the values of “PIXEL_WIDTH” and “PIXEL_HEIGHT”.
The adaptability of this code will be helpful when displaying a maze for the final competition. The memory arrays can be easily updated to display image files instead of colors and several control statements can be added inside the body of the double for loop in order to identify and display which grid space the robot is located in real time.
The efficiency of the code will mitigate potential screen latency in the final competition when displaying the robot’s location. Our simplified iterative approach performs fewer data stores and calculations, conserving memory and computational power.
D. Arduino to FPGA to Screen
In the final part of the lab, we had to connect the Arduino Uno to the FPGA board. To do this, we connected two external switches (each with a 1.2 kΩ pull-down resistor) to the Arduino Uno and connected the Arduino Uno to the FPGA board. Since the robot’s primary controller is the Arduino, the eventual plan is to have the Arduino process the maze and send the data to the FPGA, which will then project it onto the VGA screen. The switches were connected to digital pins of the Arduino board and their signals sent to the FPGA; our code is shown below:
const int buttonPin1 = 10;
const int buttonPin2 = 11;
const int buttonPin1out = 2;
const int buttonPin2out = 7;
int buttonState1 = 0;
int buttonState2 = 0;
void setup() {
Serial.begin(9600);
pinMode(buttonPin1, INPUT);
pinMode(buttonPin2, INPUT);
pinMode(buttonPin1out, OUTPUT);
pinMode(buttonPin2out, OUTPUT);
}
void loop(){
buttonState1 = digitalRead(buttonPin1);
buttonState2 = digitalRead(buttonPin2);
digitalWrite(buttonPin1out, buttonState1);
digitalWrite(buttonPin2out, buttonState2);
Serial.println(buttonState1);
Serial.println(buttonState2);
}
The Arduino outputs 5 V signals while the FPGA accepts 3.3 V signals, so a voltage divider was needed to step the voltage down. We used the setup below with resistor values of 50 Ω and 100 Ω, giving 5 V × 100/(50 + 100) ≈ 3.3 V.
We then had to connect the Arduino outputs to the GPIO (31 and 33) pins of the FPGA board. Once completed, we connected the FPGA to the VGA screen and tested the switches. This was our final setup:
Our video is here.
For this lab, we had to implement radio communication between the Arduino and FPGA.
Russell, Giacomo
The example code provided an implementation of RF that would transmit the current time using the millis() function. The time value was sent on the transmitting end through the use of the radio.write() function. This value was received on the other Arduino through the use of the radio.read() function.
We replaced the “got_time” variable that represented the time value in the example code with another unsigned long variable. This unsigned long variable would be used to send over coordinates in the later parts of this lab.
Instead of sending the entire maze wirelessly on each iteration of the Arduino’s loop, we decided to send only the current coordinate’s x and y values. We represented the coordinate as a two-digit number, with the y value in the tens digit and the x value in the ones digit. For example, to transmit the current coordinate (2,3), we would send 32. We reasoned that sending individual tile data would be better than sending the whole maze array, as it reduces the number of packets between the transmitting and receiving Arduinos for every “move” our robot makes.
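A small sketch of the packing on the transmitting side (sendCoordinate is a hypothetical helper; “radio” is the RF24 object from the example code):

#include <RF24.h>

// Pack (x, y) into one two-digit number and send it, as described above.
void sendCoordinate(RF24 &radio, unsigned long x, unsigned long y) {
  unsigned long packed = 10UL * y + x;   // y in the tens digit, x in the ones digit
  radio.write(&packed, sizeof(packed));  // same radio.write() call as the example code
}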
On the receiving end, this two-digit number is converted back into individual x and y values with the following lines:
int x = received_value % 10;  // the remainder is the least significant digit (x)
int y = received_value / 10;  // the quotient is the most significant digit (y)
Once individual x and y values were extracted, we converted them into bit values, as explained in the maze communication section. The bit values determine the digital outputs that send the information in parallel to the FPGA. For example, if x = 2 and y = 3, x would be set to 2’b10 and y to 3’b011. The digital pins corresponding to the high bits (the most significant bit of x and the two least significant bits of y in this example) would be set HIGH, and the rest set LOW.
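A sketch of that parallel output (the pin numbers are hypothetical, and the pins are assumed to have been configured as OUTPUTs in setup()):

const int xPins[2] = {2, 3};      // x is 2 bits (0-3)
const int yPins[3] = {4, 5, 6};   // y is 3 bits (0-4)

// Drive one bit of the coordinate onto each parallel line to the FPGA.
void writeCoordinate(int x, int y) {
  for (int b = 0; b < 2; b++) digitalWrite(xPins[b], bitRead(x, b));
  for (int b = 0; b < 3; b++) digitalWrite(yPins[b], bitRead(y, b));
}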
Michelle, Kristina, Jo
We updated our Verilog code from Lab 3 to display a 4 by 5 grid array instead of a 2 by 2 grid array. This was done by expanding upon the memory locations so that all of the grid’s 20 squares can be accounted for with their respective colors.
The code displayed below shows how we assigned pixel colors to all of gridscreen’s square areas.
if (rdy == 0) begin
gridscreen[0][0] = 8'b111_000_00;
gridscreen[0][1] = 8'b111_000_00;
gridscreen[0][2] = 8'b111_000_00;
gridscreen[0][3] = 8'b111_000_00;
gridscreen[0][4] = 8'b111_000_00;
gridscreen[1][0] = 8'b111_000_00;
gridscreen[1][1] = 8'b111_000_00;
gridscreen[1][2] = 8'b111_000_00;
gridscreen[1][3] = 8'b111_000_00;
gridscreen[1][4] = 8'b111_000_00;
gridscreen[2][0] = 8'b111_000_00;
gridscreen[2][1] = 8'b111_000_00;
gridscreen[2][2] = 8'b111_000_00;
gridscreen[2][3] = 8'b111_000_00;
gridscreen[2][4] = 8'b111_000_00;
gridscreen[3][0] = 8'b111_000_00;
gridscreen[3][1] = 8'b111_000_00;
gridscreen[3][2] = 8'b111_000_00;
gridscreen[3][3] = 8'b111_000_00;
gridscreen[3][4] = 8'b111_000_00;
rdy = 1;
end
The “rdy” bit was used to initialize the grid array. Initializing it this way let us avoid flickering blocks on the screen, which signified that the Verilog code was setting two colors at once on one grid square. The rdy bit is initialized to 0 near the top of the code so that the if statement evaluates to true the first time through. Once all the grid squares in “gridscreen” are set, rdy is set to 1 so the grid array isn’t initialized again.
We iterated through this memory area with the double for loop from Lab 3’s implementation. Only the maximum values of “i” and “j” were changed, along with “PIXEL_WIDTH” and “PIXEL_HEIGHT” for larger squares. This part of the implementation was scarcely changed because the iteration can set the square colors for a grid of any size. The code for this iteration is shown below:
PIXEL_WIDTH = 10'd64;
PIXEL_HEIGHT = 10'd64;
if ((PIXEL_COORD_X < 4 * PIXEL_WIDTH) && (PIXEL_COORD_Y < 5 * PIXEL_HEIGHT)) begin
for (i = 10'd0; i <= 10'd4; i = i + 10'd1) begin
for (j = 10'd0; j <= 10'd3; j = j + 10'd1) begin
if(((j * PIXEL_WIDTH < PIXEL_COORD_X) && (PIXEL_COORD_X < (j + 10'd1) * PIXEL_WIDTH)) && ((i * PIXEL_HEIGHT < PIXEL_COORD_Y) && (PIXEL_COORD_Y < (i + 10'd1) * PIXEL_HEIGHT))) begin
PIXEL_COLOR = gridscreen[j][i];
end
end
end
end
else begin
PIXEL_COLOR = 8'b000_000_00;
end
We next had to implement a communication system between the Arduino and the FPGA board. Our first attempt was an SPI system: we coded the Arduino to send 5-bit dummy robot coordinates via digital pins (2 bits for the x coordinate, 3 bits for the y coordinate), and tested its functionality with the oscilloscope. A picture of its output is shown below for output (1, 1):
Our code for the FPGA is shown below:
always @ (posedge SPI_CLK) begin
if(CS == 0 && rf == 0) begin
for (i = 8'd0; i <= 8'd4; i = i + 8'd1) begin
datain[i] = MOSI;
rf = 1'b1;
end
//parsing
//grid_coord_x = datain[4:3];
//grid_coord_y = datain[2:0];
//grid_coord_x = 4'd2;
// grid_coord_y = 5'd3;
rf = 1'b0;
end
end
We abandoned our attempt at SPI after we ran into problems debugging it. Instead, we implemented parallel communication between the Arduino and the FPGA board, owing to time constraints and our unfamiliarity with the SPI protocol. Additionally, the RF module required the same Arduino pins as SPI, so rather than adding a multiplexer or some other hardware workaround, we settled on parallel communication.
always @ (posedge CLOCK_50) begin
x1 = GPIO_0_D[25];
x2 = GPIO_0_D[24];
y1 = GPIO_0_D[27];
y2 = GPIO_0_D[26];
y3 = GPIO_0_D[31];
end
In order to communicate information from the Arduino to the FPGA, we chose a parallel implementation over SPI or I2C. We did this by converting the x and y values on the receiving RF Arduino into bit values. The x values use two bits (the robot can only be at x coordinates 0 to 3, giving 4 possible options: 00, 01, 10, 11). The y values use three bits (the robot can only be at y coordinates 0 to 4, giving 5 possible options: 000, 001, 010, 011, 100). We needed voltage dividers (with values of 100 Ω and 50 Ω) on each of the bit lines. A photo of our setup is below:
The robot was simulated to move back and forth across the screen in the x direction. Each time a side was reached, the simulated robot would move one space down the screen. Once the final grid space was reached (the bottom right corner of the grid), the simulated robot would reset itself at (0,0).
Here is a video of the serial monitor displaying the new coordinates of the simulated robot that are sent to the FPGA to be displayed on the screen:
Based on the new x and y coordinates of the simulated robot, the memory array for the grid was updated to store green in the current grid square. Once a grid square had been visited, it was set to blue to indicate that the area had already been explored. If the robot reached a square again, that square would turn green to indicate the robot’s position, whether or not it had been explored before. The following picture displays the grid for the initial position of the simulated robot, (0,0). The green square is the robot’s starting area and the red squares are unexplored areas.
While we did not have the chance to test this section of the code due to the previous issues, we wrote the following code so that previously visited locations would turn blue. The previous location is saved in two registers; on the next cycle, after the new current location has been updated, those registers are used to recolor the old square blue. The code is below.
gridscreen[lastsquare_x][lastsquare_y] = 8'b000_000_11;
lastsquare_x = grid_coord_x;
lastsquare_y = grid_coord_y;
The radio is set up to snake through the grid: start at (0,0), move right 3 coordinates, move down 1 coordinate, move left 3 coordinates, and so on. The grid, however, is not updating correctly. A video of our current result is shown below; we have not yet finished debugging the code:
We ran into many bugs when attempting to implement the grid. The initial problem was squares flickering between red and green when displaying the grid. We fixed this with the previously described ready bit, so that the initial grid would not be rewritten every cycle. After that, the grid displayed properly, but inconsistent “visited” blocks would still show up. One potential cause we found was that values in the Arduino and FPGA code were left floating, so they could have been changing intermittently; we fixed this by initializing the values on both the Arduino and FPGA sides.
At that point the incorrect values showed some consistency: the first and third rows always remained red. Because these were the odd rows, we concluded that the least significant bit was not being read correctly. To check the FPGA logic, we hard-coded the least significant bit to 1; both odd rows then turned green when anticipated, confirming that our FPGA logic was correct. To debug further, we tried switching the pins on both the Arduino and the FPGA. We then probed the line with the oscilloscope and saw the correct high voltage, confirming that the FPGA was receiving the signal. We next tried outputting the value to an LED on the FPGA, but the LED would not light up with the signal. Another hypothesis was that the sampling rate on the FPGA was too high; we tried switching the 25 MHz clock, but this did not fix the issue. Additionally, we lengthened the delay between packets from the Arduino to the FPGA in case the FPGA needed more time to read the data, but this did not fix the issue either.
The updated link above shows the correct simulated movement for the robot based on the information sent from the pair of Arduinos to the FPGA. The main issue was that we were taking the wrong voltage drop from our voltage dividers, so the GPIO pins on the FPGA weren’t receiving high enough voltages: they saw only about 1.3 V, when they require levels around 3.3 V for a logic high.
Here is the FPGA initialization code:
initial begin
rdy = 0;
x1 = 0;
x2 = 0;
y1 = 0;
y2 = 0;
y3 = 0;
end
The objective of the milestone was to have the robot follow a line of black tape and traverse a grid in the shape of a figure 8.
We connected 5 sensors to the robot: 3 in the front, 2 in the back. We tested each of them to determine a threshold value of 840 for distinguishing white from black: over white, the sensors’ analogRead values never exceeded 840; over black tape, the readings were always above 840.
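A simple calibration sketch of the kind used to pick such a threshold (A0 here is a placeholder for whichever analog pin a line sensor is on):

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(analogRead(A0));  // reads above the threshold over tape, below it over white
  delay(100);
}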
The three front sensors detect whether the robot is on a line. In our code, when the front middle sensor and one of the front side sensors are sensing the black line, the robot is considered on the line. When the front middle sensor and one of the side sensors are off the line, the robot is considered off the line and readjusts accordingly. The two back side sensors are used to detect cross sections: when the front sensors are on a black line and both back side sensors also detect a black line, the robot is considered to be on a cross section. Our code is shown below:
void move(){
if (analogRead(M) >= threshold ){
leftservo.write(103);
rightservo.write(85);
}
//if leftfront and middle sensor is white and rightfront is black, move right, left wheel faster
else if((analogRead(LF)<=threshold)){
leftservo.write(98);
rightservo.write(94);
}
//if rightfront and middle sensor is white and leftfront is black, move left, right wheel faster
else if(analogRead(RF)<=threshold){
leftservo.write(94);
rightservo.write(89);
}
}
Our first (presentable) test run worked like this. We later decided to slow down the servos so that our robot could line follow more smoothly. Here it is.
We combined our line-following code with the figure 8 code to make things easier, and as a result our figure eight implementation is relatively straightforward: if the back two sensors detect a line, the robot is at a cross section; it then turns right and follows the line (4 times) before turning left and following the line (also 4 times). When turning, we command one wheel to stop while the other keeps moving, so the robot turns toward the stopped wheel. Below is our code for detecting cross sections:
void move_one(){ // move forward until the robot reaches a cross section
  while (!(analogRead(LB) >= threshold_l && analogRead(RB) >= threshold_r)) {
    move();
  }
  leftservo.write(94);
  rightservo.write(94);
}
For the turning functionality, we created helper functions turn_left and turn_right. An example is below:
void turn_right(){
leftservo.write(98);
rightservo.write(98);
delay(500);
while(analogRead(M)<=threshold){
leftservo.write(98);
rightservo.write(98);
}
}
As mentioned previously, our figure_eight function simply has the robot move to a cross section and turn right, four times, then do the same with left turns, and repeat:
void figure_eight(){
move_one();
turn_right();
move_one();
turn_right();
move_one();
turn_right();
move_one();
turn_right();
move_one();
turn_left();
move_one();
turn_left();
move_one();
turn_left();
move_one();
turn_left();
move_one();
}
Here is a video of the robot following a figure eight. We later adjusted the back sensor positions and increased the turning speed. Here is our slightly speedier robot.
One objective of this milestone was to detect and classify the different treasures (at frequencies of 7 kHz, 12 kHz, and 17 kHz). In addition, we had to add wall detection to the robot.
We used the oscilloscope connected to the outputs of the treasure (shown in the image below) to set the frequency to 7 kHz, 12 kHz, and 17 kHz. We did this to ensure that the values measured by the phototransistor circuit would be as accurate as possible.
This video (as shown in Lab 2) reiterates our IR system’s ability to detect and distinguish tones of 7 kHz, 12 kHz, and 17 kHz: here
In this demonstration, we show our detection of each frequency with a 3-LED setup. The program on the Arduino finds the bins that contain the peak of the FFT, then drives the appropriate pin to light the correct LED. We also used an inverting op-amp in order to get a more accurate bin reading from the FFT.
Red LED = 7kHz (bins 46 and 47)
Blue LED = 12 kHz (bin 80)
Green LED = 17kHz (bin 114)
We attached a distance sensor to the front of our robot so that it could detect walls and stop accordingly. We extended our previous move_one function (see Milestone 1) because we wanted the robot not only to detect a wall but also to stop at the cross section in front of it. We determined that the values output by the distance sensor to the Arduino start decreasing as the robot approaches a wall. To set the threshold, we checked the output value over serial at a variety of distances. By sampling the sensor output every 50 ms, we could check at every cross section whether the robot was approaching (and a short distance from) a wall. We set the robot to stop at cross sections so that it stops at a position that is easy to navigate from.
Below is our code:
void move_one(){
  // move forward until the robot reaches a cross section
  while (!(analogRead(LB) >= threshold_l && analogRead(RB) >= threshold_r)) {
    move();                    // line following function from Milestone 1
    past = analogRead(A5);     // read and save the distance sensor output
    delay(50);
    current = analogRead(A5);  // read and save the output 50 ms later
  }
  // once at an intersection, check to see if the robot is approaching a wall
  if (current + 15 < past) {
    // a "buffer" of 15 keeps minor disturbances from stopping the robot
    // prematurely/unpredictably
    leftservo.write(94);
    rightservo.write(94);
    delay(10000);
  }
}
The goal of this milestone is to implement an algorithm to facilitate maze exploration on a 5x4 grid with this layout:
Figure 1. Maze grid. Each intersection represents a grid location. “x” is where the robot starts and north is the top of the grid.
Ultimately, we want a working algorithm that facilitates maze exploration and an indication that all that can be explored has been explored, both in simulation and in real life.
Our first step was to decide how to “translate” a maze into code. We followed the advice of Team Alpha and chose to store information about the maze in two 5x4 matrices. One matrix records whether each location in the maze has been explored (1 being unexplored and 0 being explored); each index of the matrix corresponds to the respective coordinate on the real maze grid. The other matrix holds information about the walls. Each index contains a decimal number (0 to 15), which converts to a 4-bit binary number in which each bit represents the presence or absence of a wall: a 1 indicates no wall and a 0 indicates a wall. Please see Figure 1 for how we specified directions (“north,” “south,” etc.). The bits are ordered, from most to least significant, West East South North. For example, 0011 means there is a wall to the west and to the east of the robot.
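A minimal sketch of decoding one wall value under this encoding (written here as Arduino-style C++, though our simulation itself is in Java):

// Bit order West-East-South-North, so North is the least significant bit.
// 1 = opening, 0 = wall.
bool openNorth(int cell) { return bitRead(cell, 0); }
bool openSouth(int cell) { return bitRead(cell, 1); }
bool openEast(int cell)  { return bitRead(cell, 2); }
bool openWest(int cell)  { return bitRead(cell, 3); }
// Example: 0b0011 -> open to the north and south, walls to the west and east.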
Our group chose to implement the simulation in Java. We did not know about the simulation code provided to us until we had already implemented depth first search (DFS), so our algorithm is not compatible with the provided graphical representation. We are still able to show that our algorithm works, however!
Here is a video of our code running. Here is what our code printed out:
x location:3
y location:4
...
x location:2
y location:4
...
x location:1
y location:4
...
x location:0
y location:4
...
x location:0
y location:3
...
x location:0
y location:2
...
x location:0
y location:1
...
x location:0
y location:0
...
x location:1
y location:0
...
x location:2
y location:0
...
x location:3
y location:0
...
x location:3
y location:1
...
x location:3
y location:2
...
x location:3
y location:3
...
x location:1
y location:1
...
x location:1
y location:3
...
x location:2
y location:3
...
x location:3
y location:4
...
all searched[[I@7f31245a, [I@6d6f6e28, [I@135fbaa4, [I@45ee12a7, [I@330bedb4, [I@2503dbd3, [I@4b67cf4d, [I@7ea987ac, [I@12a3a380, [I@29453f44, [I@5cad8086, [I@6e0be858, [I@61bbe9ba, [I@610455d6, [I@511d50c0, [I@60e53b93, [I@5e2de80c]
Here is the maze we used (both in matrix form and in real life):
{ { 9, 1, 3, 5 },
{ 8, 6, 13, 12},
{ 12, 11, 6, 12 },
{ 8, 3, 7, 14 },
{ 10, 3, 3, 7 } };
Figure 2. This is how the maze would be set up in real life. Picture is taken from video provided by Team Alpha. As you can see, the locations our algorithm outputs match the grids the robot traverses in Team Alpha’s video.
We chose to create an Arduino object which contains the current location of our robot (the x and y coordinates) and the direction it is facing. The appropriate functions (i.e., setters and getters) were implemented. We assume that our robot starts at the bottom-right grid location; see Figure 1. That grid location corresponds to index [4][3] in our matrix. Note that we refer to the “x-coordinate” as the column index and the “y-coordinate” as the row index.
We implemented DFS with two linked lists and used our “frontier” list as a stack. The pseudocode is as follows:
LinkedList<Arduino> frontier; // contains grid locations that still need to be searched
LinkedList<Arduino> nodesSearched; // contains grid locations that have been searched.
while (frontier is not empty){
//Pop from top of frontier
if(there is no wall && adjacent grid is not in frontier && adjacent grid is not in nodesSearched){
//Append adjacent grid locations to top of frontier
}
}
//Print out that all possible nodes have been searched
Our group was tasked with choosing an algorithm to dictate the robot’s maze exploration. We believe that Depth First Search (DFS) is the best algorithm to use in this case (as we have shown through our simulation). However, due to time constraints, we were not able to get a DFS algorithm working on our robot. We instead worked on the wall sensing code, using a multiplexer for the robot’s multiple analog signals.
In order to implement depth first search on our robot, we had to implement wall sensing at each of the grid’s intersections to provide wall information to the algorithm. Our wall sensing code aims to acquire correct readings from the three proximity sensors on the front of our robot (one facing left, one facing right, one facing forward) and to test those readings appropriately. We chose to add side-facing sensors so that the robot can determine all wall locations without turning to use the front wall sensor, which helps us maximize speed. We approached this in the following four steps (see the function wall_locate() in the linked code):
Averaging of values collected from the proximity sensors: At each intersection, we analyzed the incoming data from the left, forward, and right proximity sensors. These values were averaged over 7 iterations so that outlier readings did not affect the movement of the robot.
Determining the existence of a wall based on the differences between current and past values: At each intersection, the current wall sensor values are compared to the previous values. If the difference is greater than 10, this is registered as a change in whether a wall is detected. For example, if the “current_average” on one sensor at the current intersection is 10 lower or higher than the “past_average” from the previous intersection, the sensor registers a change in whether there is a wall in front of it. This change affects the variable wallFront for the front sensor, wallLeft for the left sensor, and wallRight for the right sensor; we negate these boolean variables every time such a change is recorded.
Storing the wall information at a specific location: We stored wallFront, wallLeft, and wallRight in a byte variable called currentWallValue, both for efficiency and to communicate wall information to the DFS() in bit form. Each wall corresponds to a value of 1, 2, or 4. For example, if there were walls to the left, right, and front of the robot, this byte variable would be B111. (A short sketch of this packing follows this list.)
Testing: We implemented a testing algorithm (not the DFS()) to see if the robot was moving properly according to the current wall information. For example, if there were walls to the front, left, and right of the robot, the robot would have to turn around in order to evade the dead end. The robot’s full turn around was accomplished by calling the function turn_right() twice. Link to Wall Sensing Code.
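A small sketch of the packing described in step 3 (the exact bit-to-wall assignment here is illustrative; only the B111 example is given above):

byte packWalls(bool wallLeft, bool wallFront, bool wallRight) {
  byte currentWallValue = 0;
  if (wallLeft)  currentWallValue |= 1;  // one bit per wall
  if (wallFront) currentWallValue |= 2;
  if (wallRight) currentWallValue |= 4;
  return currentWallValue;               // B111 when walls on all three sides
}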
The overall code functioned according to the following flowchart. Move One is the function that prompts the robot to move one intersection forward. At the intersection, the robot reads the wall values as described above, determines the walls at the current location (step 3), and turns according to the wall locations (step 4). If the robot determines it should continue forward without turning, the code loops back to Move One. If the robot determines that it has searched all possible locations, it turns on an LED; this last part has not been implemented in our code yet, but we plan to add it to show that the robot has explored the entire maze. The code for this section can be found at the same link as the wall sensing code.
We ran out of analog ports for the sensors, so we decided to use a mux (model 4051BC) to alternate readings between them. We connected the left and right wall sensors and the left/right front line sensors of the robot to the mux, and coded the robot to read the sensors as needed. A diagram of our wiring is below:
Four channels in total are read through the mux: the left and right IR wall sensors, and the front left/right line sensors. Pins 10 and 9 drove the mux address bits, selecting which sensor’s signal was routed to the output; mux pin 3 connected to Arduino analog pin A3, pins 5-7 to ground, and pin 16 to the Arduino 5 V source. Our preliminary code for the mux (2 inputs only) is shown below:
int totalChannels = 2;
int addressA = 2;
int A = 0; //Address pin A
void setup() {
Serial.begin(9600);
// Prepare address pins for output
pinMode(addressA, OUTPUT);
// Prepare read pin
pinMode(A3, INPUT);
}
void loop() {
//Select each pin and read value
for(int i=0; i<totalChannels; i++){
A = bitRead(i,0); //Take first bit from binary value of i channel.
//Write address to mux
digitalWrite(addressA, A);
//Read and print value
Serial.print("Channel ");
Serial.print(i);
Serial.print(" value: ");
Serial.println(analogRead(A3));
}
delay(2000);
}
We did run into some issues once we added the multiplexer. After implementing the mux above on our robot, we started having problems with line detection and wall sensing. When we tested the robot in the maze, the line following became choppier than before, and the robot would only turn right. It seemed to detect walls and intersections only intermittently, so we believe the problem stems from the many iterations the code goes through. We believe this is a software issue, and think it would be best to rebuild the implementation from scratch, checking for bugs step by step rather than merging existing code together; this gives us less chance of failure.
In place of Java’s LinkedList, we will use the QList library for Arduino. Additionally, now that our robot is physically moving between grid locations, at each “pop” of the stack we will check whether the popped location is adjacent to the robot’s current location. If not, we will iterate through nodesSearched until we find an adjacent grid location that will take us toward the newly popped location.
Additionally, instead of reading from a hard-coded matrix containing wall information, we will use wall information from the distance sensor readings.
In the future, we plan to implement a faster way for the robot to navigate back to a previous location when it reaches a “dead end.” This may include implementing Dijkstra’s algorithm.
Here is a video of one of our attempts at getting the robot running. The robot can be seen sensing a wall, stopping because of it, and turning around.
To improve the consistency of our robot, we worked to fix a majority of our wiring to make it significantly neater. We switched the breadboard to a (through hole) board with soldered on resistors for the pins which connected directly to the arduino and headers for the power and ground. We additionally switched the male to female wiring on all the line sensors from individual wires to a grouped male to female wire in a set of three for the power, ground and sensor output lines. These changes made our robot easier to debug and created less disconnection of wires.
We switched to storing our locations for DFS in an int array instead of using the QList because we were worried about memory. Now, we only have to save an integer array and an additional integer which serves as our “pointer” in the array. Every time we “pop” a location, we decrease the pointer value and every time we add to the “stack,” we write into the array and then increment the pointer.
Here is a video of our maze mapping.
For this milestone, we needed a system that could display the walls and treasure in a maze as the robot found them. We also needed our system to display a “done” signal on the screen, and play a “done” signal on the speaker when the maze was successfully mapped.
To do the treasure detection, we simply have to integrate the code and hardware from lab 2 into the DFS code. We copied the circuit from Lab 2 three times over (1 for each treasure detector implemented - left, front, and right) and connected three LEDs to the remaining digital outputs of the Arduino. We will have to reorganize our current mux to wire the additional treasure detectors, since we ran out of analog pins. Below are our new select signals and their corresponding outputs:
Select Bits | Input Number | Sensor Output |
---|---|---|
000 | Y0 | Left Front Line Sensor |
001 | Y1 | Right Front Line Sensor |
010 | Y2 | Left Wall Sensor |
011 | Y3 | Right Wall Sensor |
100 | Y4 | Front Treasure Detector |
101 | Y5 | Right Treasure Detector |
110 | Y6 | Left Treasure Detector |
111 | Y7 | Microphone |
We plan on connecting the microphone to the empty Y7 slot.
To incorporate the treasure detection in our main code, we will only need minor changes from the previous labs, since our Lab 2 code is already working. This will likely require two functions in our main code: treasure_detect and treasure_display. Treasure_detect will iterate through the analog pins 4-7 to detect if there was a treasure available and output 00 (no treasure), 01 (7 kHz), 10 (12 kHz), and 11 (17 kHz) depending on treasure availability and frequency. Treasure_display will light the appropriate LEDs as dictated by treasure_detect output .The fft bin number will also have to be changed, since it takes up a large amount of Arduino memory and we’re utilizing three treasure detectors instead of just one.
Our DFS code already handles the wall detection and converts the wall sensor data into something which can be sent to the FPGA in 4 bits where each bit represents the presence and absence of a wall. Additionally, at each “move_to” function, we are able to output the location of the robot in five bits.
We improved our Lab 4 FPGA Display by adding walls to each grid space in the 5 x 4 grid array. We took in wall data from the receiving side of the radio transmission on the Arduino through four additional GPIO pins on the FPGA.
We added memory array registers for the walls which were wall1, wall2, wall3, and wall4 for the top wall, bottom wall, left wall, and right wall respectively. This was the beginning and end of our memory array initialization:
wall1[0][0] = 8'b000_000_00;
wall1[0][1] = 8'b000_000_00;
wall1[0][2] = 8'b000_000_00;
…
wall4[3][2] = 8'b000_000_00;
wall4[3][3] = 8'b000_000_00;
wall4[3][4] = 8'b000_000_00;
Based on the binary values received from the four additional GPIO pins, we set the walls at each grid space utilizing the memory array above. We change the color from our screen background (black: 8’b000_000_00) to white (8’b111_111_11). Wall Determination at a Grid Point:
if (val == 1'b1) begin
if (wallFront) begin
wall1[grid_coord_x][grid_coord_y] = 8'b111_111_11;
end
if (wallBottom) begin
wall2[grid_coord_x][grid_coord_y] = 8'b111_111_11;
end
if (wallLeft) begin
wall3[grid_coord_x][grid_coord_y] = 8'b111_111_11;
end
if (wallRight) begin
wall4[grid_coord_x][grid_coord_y] = 8'b111_111_11;
end
end
Once we set the walls at a certain grid space, we displayed the updated representation of the maze by iterating over memory array within the double for loop we implemented in Lab 3:
//Upper Walls
if(((j * PIXEL_WIDTH + 10'd0 < PIXEL_COORD_X) && (PIXEL_COORD_X < (j + 10'd1) * PIXEL_WIDTH - 10'd5)) && ((i * PIXEL_HEIGHT < PIXEL_COORD_Y) && (PIXEL_COORD_Y < (i + 10'd1) * PIXEL_HEIGHT - 10'd5))) begin
PIXEL_COLOR = wall1[j][i];
end
The video here displays how our wall data is displayed on the screen.
Our above implementation does not currently include a “done” message; however, we plan to include this into our final design by adding another connection between the receiving arduino and fpga that will send a high value when the dfs() reaches the “all nodes searched” state which is included in our original dfs() function. The DFS already displays “all nodes searched” on the serial monitor screen when the algorithm is complete.
While we did not have the chance to implement the done signal using FPGA, we have determined our plan for implementation. The done signal will be displayed once the DFS algorithm has finished. To indicate that this being done, the robot’s current location square will turn purple which will signify that it has finished its search. This signal will be sent from the arduino on the robot that is running the DFS.
While we did not have the chance to implement the done sound, we have determined our plan for implementation. We will implement the same setup as used in lab 3, this includes an 8-bit DAC with the digital side wired to an FPGA and the analog side connected to the auxiliary jack. We will use the tri tone signal created in the lab to signify done. Please refer to lab 3 on our website for more information on the setup and code. The FPGA in use will be the same FPGA that is receiving the transmitted information from the robot and is used to display the map. This sound will be triggered by the same signal which will display the done signal.
The goal of this milestone is to implement an algorithm to facilitate maze exploration on a 5x4 grid of this layout:
Figure 1. Maze grid. Each intersection represents a grid location. “x” is where the robot starts and north is the top of the grid.
Ultimately, we want a working algorithm that facilitates maze exploration, plus an indication that everything that can be explored has been explored, both in simulation and in real life.
Our first step was to decide how to "translate" a maze into code. We followed the advice of Team Alpha and chose to save information about the maze in two 5x4 matrices. One matrix records whether each location in the maze has been explored (1 being unexplored and 0 being explored); each index of the matrix corresponds to the respective coordinate on the real maze grid. The other matrix contains information about the walls in the maze: each index holds a decimal number (0 to 15), which converts to a 4-bit binary number where each bit represents the presence (or absence) of a wall. A 1 indicates the absence of a wall and a 0 the presence of one. Please see Figure 1 for how we specified directions ("north," "south," etc.). The bits are organized as follows: West East South North. Ex. 0011 would mean there is a wall to the west and east of the robot.
Our group chose to implement the simulation in Java. However, we did not know about the simulation code provided to us until we had already implemented depth-first search (DFS), so our algorithm is not compatible with the provided graphical representation. We are still able to show that our algorithm works!
Here is a video of our code running. Here is what our code printed out:
x location:3
y location:4
...
x location:2
y location:4
...
x location:1
y location:4
...
x location:0
y location:4
...
x location:0
y location:3
...
x location:0
y location:2
...
x location:0
y location:1
...
x location:0
y location:0
...
x location:1
y location:0
...
x location:2
y location:0
...
x location:3
y location:0
...
x location:3
y location:1
...
x location:3
y location:2
...
x location:3
y location:3
...
x location:1
y location:1
...
x location:1
y location:3
...
x location:2
y location:3
...
x location:3
y location:4
...
all searched [[I@7f31245a, [I@6d6f6e28, [I@135fbaa4, [I@45ee12a7, [I@330bedb4, [I@2503dbd3, [I@4b67cf4d, [I@7ea987ac, [I@12a3a380, [I@29453f44, [I@5cad8086, [I@6e0be858, [I@61bbe9ba, [I@610455d6, [I@511d50c0, [I@60e53b93, [I@5e2de80c]
(The "[I@…" entries are Java's default toString() output for the int arrays in our nodesSearched list, printed once every reachable node had been searched.)
Here is the maze we used (both in matrix form and in real life):
{ { 9, 1, 3, 5 },
{ 8, 6, 13, 12},
{ 12, 11, 6, 12 },
{ 8, 3, 7, 14 },
{ 10, 3, 3, 7 } };
Figure 2. This is how the maze would be set up in real life. Picture is taken from video provided by Team Alpha. As you can see, the locations our algorithm outputs match the grids the robot traverses in Team Alpha’s video.
We chose to create an Arduino object which contains the current location of our robot (the x and y coordinates) and the direction our robot is facing. The appropriate functions (i.e., setters and getters) were implemented. We assume that our robot starts at the bottom-right grid location (see Figure 1), which corresponds to index [4][3] in our matrix. It should be noted that we refer to the "x-coordinate" as the column index and the "y-coordinate" as the row index.
We implemented DFS with two linked lists and used our “frontier” list as a stack. The pseudocode is as follows:
LinkedList<Arduino> frontier;      // contains grid locations that still need to be searched
LinkedList<Arduino> nodesSearched; // contains grid locations that have been searched
while (frontier is not empty) {
    // Pop from top of frontier
    if (there is no wall && adjacent grid is not in frontier && adjacent grid is not in nodesSearched) {
        // Append adjacent grid locations to top of frontier
    }
}
// Print out that all possible nodes have been searched
Our group was tasked with choosing an algorithm to dictate the maze exploration of the robot. We believe that depth-first search (DFS) is the best algorithm to use in this case, as we have shown through our simulation. However, due to time constraints, we were not able to get a DFS algorithm working on our robot. We therefore focused on implementing the wall sensing code, with a multiplexer for the multiple analog signals on our robot.
In order to implement depth-first search on our robot, we had to implement wall sensing at each of the grid's intersections to provide wall information to the algorithm. Our wall sensing code aimed to acquire correct readings from the three proximity sensors on the front of our robot (one facing left, one facing right, one facing forward) and to test those readings appropriately. We chose side-facing sensors so that the robot could determine all wall locations without turning to use the front wall sensor; this choice helps us maximize speed. We approached this in the following four steps (see the function wall_locate() in the linked code):
Averaging of values collected from the proximity sensors: at each intersection, we analyzed the incoming data from the left, forward, and right proximity sensors. These values were averaged over 7 iterations so that outlier proximity readings did not affect the movement of the robot.
Determining the existence of a wall based on the difference between current and past values: at each intersection, the current wall sensor averages are compared to the previous ones. If the difference is greater than 10, this registers as a change in whether a wall is detected. For example, if the current_average on one sensor at the current intersection is 10 higher or lower than the past_average from the previous intersection, the sensor registers a change in whether there is a wall in front of it. This change toggles the variable wallFront for the front sensor, wallLeft for the left sensor, and wallRight for the right sensor (see the sketch after this list).
Storing the wall information at a specific location: we stored wallFront, wallLeft, and wallRight in a byte variable called currentWallValue, both for efficiency in our algorithm and to communicate wall information to the DFS() in bit form. Each wall corresponded to a value of 1, 2, or 4. For example, if there were walls to the left, right, and front of the robot, this byte variable would be B111.
Testing: we implemented a testing routine (not the DFS()) to check that the robot was moving properly according to the current wall information. For example, if there were walls to the front, left, and right of the robot, the robot would have to turn around in order to escape the dead end; the full turn-around was accomplished by calling the function turn_right() twice. Link to Wall Sensing Code.
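Below is a minimal sketch of steps 1-3 for the front sensor. The wiring in readFrontSensor() and the bit assignment in packWalls() are illustrative placeholders; our actual implementation is the wall_locate() function at the link above.
bool wallFront = false, wallLeft = false, wallRight = false;
int past_average = 0;

int readFrontSensor() {
  return analogRead(A5); // placeholder wiring for the front IR sensor
}

void updateFrontWall() {
  // Step 1: average 7 readings so a single outlier cannot flip the flag
  long sum = 0;
  for (int i = 0; i < 7; i++) {
    sum += readFrontSensor();
  }
  int current_average = sum / 7;

  // Step 2: a jump of more than 10 between intersections toggles the flag
  if (abs(current_average - past_average) > 10) {
    wallFront = !wallFront;
  }
  past_average = current_average;
  // The left and right sensors are handled analogously.
}

// Step 3 (illustrative bit assignment): pack the three flags into a byte
byte packWalls() {
  return (wallFront ? 4 : 0) | (wallRight ? 2 : 0) | (wallLeft ? 1 : 0);
}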
The overall setup of the code functioned according to the following flowchart: Move One is the function that prompted the robot to move one intersection forward. At the intersection, the robot would read the wall values as described in the previous section. After reading the wall values and determining the walls at the current location (step 3), the robot would turn according to the wall locations (step 4). If the robot determined it should continue forward without turning, the code would loop back to Move One. If the robot determined that it had searched all possible locations, it would turn on an LED. This last part has not been implemented in our code yet, but we plan to add it to indicate that the robot has explored the entire maze. The code for this section can be found at the same link as the wall sensing code.
We ran out of analog ports for the sensors, so we decided to implement a mux (model 4051BC) to alternate reading between them: we connected the left and right wall sensors and the left/right front line sensors of the robot to the mux, and coded the robot to read each sensor as needed. A diagram of our wiring is below:
The total number of channels going into the mux is four: the left and right IR wall sensors, and the front left/right line sensors. Arduino pins 10 and 9 drove the mux address bits, determining which sensor's output was selected; mux pin 3 connected to Arduino analog pin A3, mux pins 5-7 to ground, and mux pin 16 to the Arduino 5 V source. Our preliminary code for the mux (2-input only) is shown below:
int totalChannels = 2;
int addressA = 2; // Address pin A
int A = 0;        // Current address bit value

void setup() {
  Serial.begin(9600);
  // Prepare address pin for output
  pinMode(addressA, OUTPUT);
  // Prepare read pin
  pinMode(A3, INPUT);
}

void loop() {
  // Select each channel and read its value
  for (int i = 0; i < totalChannels; i++) {
    A = bitRead(i, 0); // Take the first bit from the binary value of channel i
    // Write address to mux
    digitalWrite(addressA, A);
    // Read and print value
    Serial.print("Channel ");
    Serial.print(i);
    Serial.print(" value: ");
    Serial.println(analogRead(A3));
  }
  delay(2000);
}
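Extending this to all eight mux inputs only requires driving three select lines instead of one. The sketch below is a minimal illustration of the idea; the select pins (8, 9, 10) and the read pin (A3) are assumptions here and would have to match the actual wiring.
// Hypothetical 8-channel read through the 4051 mux.
// Assumes select bits S0-S2 on digital pins 8, 9, 10 (S0 = least
// significant) and the mux common output wired to A3.
const int selectPins[3] = {8, 9, 10};

void setup() {
  Serial.begin(9600);
  for (int p = 0; p < 3; p++) {
    pinMode(selectPins[p], OUTPUT);
  }
}

int readMuxChannel(int channel) {
  // Put the 3-bit channel number on the select lines
  for (int b = 0; b < 3; b++) {
    digitalWrite(selectPins[b], bitRead(channel, b));
  }
  delayMicroseconds(10); // let the mux output settle before sampling
  return analogRead(A3);
}

void loop() {
  for (int ch = 0; ch < 8; ch++) {
    Serial.print("Channel ");
    Serial.print(ch);
    Serial.print(" value: ");
    Serial.println(readMuxChannel(ch));
  }
  delay(2000);
}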
We did run into some issues once we added the multiplexer. Once we implemented the mux above on our robot, we started having problems with line detection and wall sensing. When we tested the robot in the maze, the line following became choppier than we had seen before, and the robot would only turn right. It seemed to detect walls and intersections only intermittently, so we believe the problem stems from the many iterations the robot runs through. We believe this is a software issue and think it would be best to restart the implementation we had set up; that way we have less chance of failure, since we would be checking for bugs incrementally rather than merging existing code together all at once.
In place of Java's LinkedList, we will use the Arduino QList library. Additionally, now that our robot is physically moving between grid locations, at each "pop" of the stack we will check whether the popped location is adjacent to the robot's current location. If not, we will iterate through nodesSearched until we find an adjacent grid location that will take us to the newly popped location.
Additionally, instead of reading from a hard-coded matrix containing information about the walls, we will be using wall information from the distance sensor readings.
In the future, we plan to implement a faster way for the robot to navigate back to a previous location when it reaches a “dead end.” This may include implementing Dijkstra’s algorithm.
Here is a video of one of our attempts at getting the robot running. The robot can be seen sensing a wall, stopping because of the wall, and turning around.
To improve the consistency of our robot, we reworked a majority of our wiring to make it significantly neater. We replaced the breadboard with a through-hole board carrying soldered-on resistors for the pins that connected directly to the Arduino, plus headers for power and ground. We also replaced the individual male-to-female wires on all the line sensors with grouped male-to-female wires in sets of three for the power, ground, and sensor output lines. These changes made our robot easier to debug and reduced wire disconnections.
We switched to storing our locations for DFS in an int array instead of using QList because we were worried about memory. Now we only have to save an integer array and one additional integer that serves as our "pointer" into the array. Every time we "pop" a location we decrement the pointer, and every time we add to the "stack" we write into the array and then increment the pointer.
Here is a video of our maze mapping.
For this milestone, we needed a system that could display the walls and treasure in a maze as the robot found them. We also needed our system to display a “done” signal on the screen, and play a “done” signal on the speaker when the maze was successfully mapped.
To add treasure detection, we simply have to integrate the code and hardware from Lab 2 into the DFS code. We copied the circuit from Lab 2 three times over (one for each treasure detector: left, front, and right) and connected three LEDs to the remaining digital outputs of the Arduino. We will have to reorganize our current mux to wire in the additional treasure detectors, since we ran out of analog pins. Below are our new select signals and their corresponding outputs:
Select Bits | Input Number | Sensor Output |
---|---|---|
000 | Y0 | Left Front Line Sensor |
001 | Y1 | Right Front Line Sensor |
010 | Y2 | Left Wall Sensor |
011 | Y3 | Right Wall Sensor |
100 | Y4 | Front Treasure Detector |
101 | Y5 | Right Treasure Detector |
110 | Y6 | Left Treasure Detector |
111 | Y7 | Microphone |
We plan on connecting the microphone to the empty Y7 slot.
To incorporate treasure detection into our main code, we will only need minor changes from the previous labs, since our Lab 2 code is already working. This will likely require two functions in our main code: treasure_detect and treasure_display. treasure_detect will iterate through analog pins 4-7 to detect whether a treasure is present and output 00 (no treasure), 01 (7 kHz), 10 (12 kHz), or 11 (17 kHz) depending on treasure availability and frequency. treasure_display will light the appropriate LEDs as dictated by the treasure_detect output. The FFT bin numbers will also have to be changed, since the FFT takes up a large amount of Arduino memory and we are utilizing three treasure detectors instead of just one.
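As a rough sketch of how these two functions might fit together (the helper dominantFFTBin() is a placeholder standing in for the Lab 2 FFT routine, and the LED pins and bin ranges here are illustrative):
// Placeholder: selects the given mux channel, runs the Lab 2 FFT, and
// returns the index of the strongest bin. Stubbed out here.
int dominantFFTBin(int channel) {
  return 0;
}

const int ledLow = 7, ledHigh = 8; // placeholder LED pins for the 2-bit code

byte treasure_detect(int channel) {
  int bin = dominantFFTBin(channel);
  if (bin >= 22 && bin <= 24) return B01; // 7 kHz
  if (bin >= 39 && bin <= 41) return B10; // 12 kHz
  if (bin >= 55 && bin <= 57) return B11; // 17 kHz
  return B00;                             // no treasure
}

void treasure_display(byte code) {
  digitalWrite(ledLow,  bitRead(code, 0));
  digitalWrite(ledHigh, bitRead(code, 1));
}

void setup() {
  pinMode(ledLow, OUTPUT);
  pinMode(ledHigh, OUTPUT);
}

void loop() {
  treasure_display(treasure_detect(4)); // channel 4 = front treasure detector
}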
Our DFS code already handles wall detection and converts the wall sensor data into a 4-bit value that can be sent to the FPGA, where each bit represents the presence or absence of a wall. Additionally, at each call to the "move_to" function, we are able to output the location of the robot in five bits.
We improved our Lab 4 FPGA display by adding walls to each grid space in the 5 x 4 grid array. We took in wall data from the Arduino on the receiving side of the radio transmission through four additional GPIO pins on the FPGA.
We added memory array registers for the walls, named wall1, wall2, wall3, and wall4 for the top, bottom, left, and right walls respectively. This was the beginning and end of our memory array initialization:
wall1[0][0] = 8'b000_000_00;
wall1[0][1] = 8'b000_000_00;
wall1[0][2] = 8'b000_000_00;
…
wall4[3][2] = 8'b000_000_00;
wall4[3][3] = 8'b000_000_00;
wall4[3][4] = 8'b000_000_00;
Based on the binary values received from the four additional GPIO pins, we set the walls at each grid space using the memory arrays above, changing their color from our screen background (black: 8'b000_000_00) to white (8'b111_111_11). Wall determination at a grid point:
if (val == 1'b1) begin
  if (wallFront) begin
    wall1[grid_coord_x][grid_coord_y] = 8'b111_111_11;
  end
  if (wallBottom) begin
    wall2[grid_coord_x][grid_coord_y] = 8'b111_111_11;
  end
  if (wallLeft) begin
    wall3[grid_coord_x][grid_coord_y] = 8'b111_111_11;
  end
  if (wallRight) begin
    wall4[grid_coord_x][grid_coord_y] = 8'b111_111_11;
  end
end
Once we set the walls at a given grid space, we displayed the updated representation of the maze by iterating over the memory arrays within the double for loop we implemented in Lab 3:
//Upper Walls
if(((j * PIXEL_WIDTH + 10'd0 < PIXEL_COORD_X) && (PIXEL_COORD_X < (j + 10'd1) * PIXEL_WIDTH - 10'd5)) && ((i * PIXEL_HEIGHT < PIXEL_COORD_Y) && (PIXEL_COORD_Y < (i + 10'd1) * PIXEL_HEIGHT - 10'd5))) begin
PIXEL_COLOR = wall1[j][i];
end
The video here shows our wall data being displayed on the screen.
Our implementation above does not currently include a "done" message; however, we plan to include this in our final design by adding another connection between the receiving Arduino and the FPGA that will send a high value when dfs() reaches the "all nodes searched" state included in our original dfs() function. The DFS already prints "all nodes searched" to the serial monitor when the algorithm is complete.
While we did not have the chance to implement the done signal on the FPGA, we have determined our plan for implementation. The done signal will be displayed once the DFS algorithm has finished: the square at the robot's current location will turn purple to signify that the search is complete. This signal will be sent from the Arduino on the robot that is running the DFS.
While we did not have the chance to implement the done sound, we have determined our plan for implementation. We will use the same setup as in Lab 3: an 8-bit DAC with its digital side wired to the FPGA and its analog side connected to the auxiliary jack. We will use the tri-tone signal created in that lab to signify done; please refer to Lab 3 on our website for more information on the setup and code. The FPGA in use will be the same one that receives the transmitted information from the robot and displays the map, and the sound will be triggered by the same signal that displays the done indicator.
The goal of this course was to build an intelligent physical system that could perceive, reason about, and evaluate its surroundings. Although our robot has not always been intelligent this semester, and at times was barely physical or system-like, our final project works. The robot we built is capable of navigating a maze efficiently, analyzing the walls around it, detecting treasures, starting on the 660 Hz microphone tone, and producing a done signal.
Our robot measures approximately 6 x 4 x 5 inches, with 3-inch-radius wheels and a pink base. A photo is shown below.
Our chassis, wheels, ball caster, and mounts were the default designs provided to us at the beginning of the year; their CAD files can be found on the course website. All other components pertaining to movement, treasure/wall/line detection, and radio transmission were also provided to us.
The robot was powered by two 9-volt batteries and a 5-volt power bank. The first 9-volt battery supplied the Arduino, the wall sensors, and the line sensors; the second supplied the servo motors that turned the robot's wheels; and the 5 V power bank powered the microphone and phototransistors.
Our robot had three front grayscale line sensors for line following, to make sure all three sensors stayed on the black electrical tape. If it detected that it was drifting off the tape, the robot could correct its direction.
We also placed two of these grayscale line sensors toward the middle and sides of our robot to detect intersections. When those back sensors detected black tape, we knew the robot was at an intersection and needed to decide its next movement.
We decided to use two larger wheels that we found in lab at the beginning of the year, because they allowed our robot to travel further for a given rotational speed of our Parallax servos. And because the wheels were fairly thick, we were not worried about our turning capabilities.
We mounted our three wall sensors on the front of our robot so it could detect walls in front of, to the left of, and to the right of its position. We positioned the left and right wall sensors directly above the front wall sensor because that was the easiest mounting point that did not require any 3D printing; since the sensors only needed to detect the top portion of the walls, this worked seamlessly and was our best option.
The final costs of our robot are detailed here. Please note that we were not required to take into account the cost of the FPGA or Arduino Uno.
For clarity and convenience, we have decided to summarize our code below:
if (robot is too far right) { go left }
else if (robot is too far left) { go right }
else { move straight }

while (left back and right back sensors are not on the black line) {
    move();
}
stop the robot;
// north = 1, east = 2, south = 3, west = 4
// We used the difference between the current and next directions to
// figure out how the robot should turn. Below is our code:
int direction_difference = curr_direction - next_direction;
if (direction_difference == -3) {
  turn_left();
}
else if (direction_difference == -1) {
  turn_right();
}
else if (direction_difference == -2) {
  turn_right();
  turn_right();
}
else if (direction_difference == 3) {
  turn_right();
}
else if (direction_difference == 2) {
  turn_right();
  turn_right();
}
else if (direction_difference == 1) {
  turn_left();
}
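Because the four directions are numbered 1-4 in clockwise order, the same table can be expressed more compactly with modular arithmetic. This is just an equivalent reformulation for reference, not what ran on the robot:
// Returns how many 90-degree right turns are needed to go from
// curr_direction to next_direction (north = 1, east = 2, south = 3,
// west = 4). A result of 3 is performed as a single left turn.
int rightTurnsNeeded(int curr_direction, int next_direction) {
  return ((next_direction - curr_direction) % 4 + 4) % 4;
}

// Examples: rightTurnsNeeded(1, 2) == 1 (north to east: one right turn);
// rightTurnsNeeded(1, 4) == 3 (north to west: one left turn);
// rightTurnsNeeded(2, 4) == 2 (east to west: turn around).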
int[] frontier;      // grid coordinates to visit
int[] nodesSearched; // grid coordinates already searched
int[] path;

writeWallInfo(); // reads in wall information
// Interpret wall information and add to the frontier stack accordingly
move_to(pop from frontier);
while (frontier is not empty) {
    writeWallInfo();
    // Interpret wall information and add to the frontier stack accordingly
    move_to(pop from frontier);
}
We used two continuous-rotation Parallax servos to power our robot. Determining the servo values for moving straight and turning took some trial and error.
For line detection, we installed five line sensors: three in the front for line following, and two in the back for intersection detection. The back-left line sensor was connected to A0 on the Arduino, the back-right line sensor to A2, and the middle front line sensor to A3. All remaining line sensors were connected via the mux to A4 of the Arduino.
When testing the sensors, we determined a threshold value of 760 for distinguishing white space from a black line: a reading over 760 meant the sensor was on a black line; under 760 meant the sensor was seeing only white space.
In order to determine whether the robot was on a line, we implemented the following logic. If the conditions for being on a line were not met, the robot would loop through the logic until it was back on a line.
Our readjustment code is shown below:
// If the right-front sensor sees white (off the line), steer left:
// right wheel faster
if (front_right <= 760) {
  leftservo.write(93);
  rightservo.write(84);
}
// If the left-front sensor sees white, steer right: left wheel faster
else if (front_left <= 760) {
  leftservo.write(95);
  rightservo.write(92);
}
// Otherwise, if the middle sensor still sees the black line, go straight
else if (analogRead(A3) > 650) {
  leftservo.write(96);
  rightservo.write(88);
}
We originally thought we could determine intersections from the front three sensors alone: if all three saw a black line, the robot was at an intersection. However, the front sensors were positioned too close together to accurately pinpoint an intersection, so we installed an additional two sensors near the wheels. If the robot was on a line (as determined by the previous logic) and the back two sensors detected a black line, the robot was considered to be at an intersection.
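A minimal sketch of this intersection check, using the back-sensor wiring (A0 and A2) and the 760 threshold described above:
// The robot is at an intersection when both back line sensors see the
// perpendicular strip of black tape. Back-left is on A0, back-right on
// A2; readings above 760 mean the sensor is over black tape.
const int LINE_THRESHOLD = 760;

bool atIntersection() {
  bool backLeftOnLine  = analogRead(A0) > LINE_THRESHOLD;
  bool backRightOnLine = analogRead(A2) > LINE_THRESHOLD;
  return backLeftOnLine && backRightOnLine;
}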
We used three long-distance IR sensors to detect walls to the left, front, and right of the robot. The left and right wall detectors were connected via the mux to analog port A4, and the front wall detector to port A5. Our wall-determination logic was similar to the line detection logic: if the analogRead() value of the port exceeded a predetermined threshold of 380, a wall was detected at that particular IR sensor; otherwise, there was no wall.
When implementing wall detection, the robot often had issues accurately detecting a wall. This turned out to have multiple causes, the simplest of which was wires hanging over the IR sensors and sending spurious wall-detected values to the robot. A second issue that came up later during construction was a faulty wire that no longer connected the Arduino to the sensor.
Our goal was to implement an algorithm to facilitate maze exploration on a 5x4 grid of this layout:
Our first step was to decide how to "translate" a maze into code. We followed the advice of Team Alpha and chose to save information about the maze in two 5x4 matrices. One matrix records whether each location in the maze has been explored (1 being unexplored and 0 being explored); each index corresponds to the respective coordinate on the real maze grid. The other matrix contains information about the walls: each index holds a decimal number (0 to 15), which converts to a 4-bit binary number where each bit represents the presence (or absence) of a wall. A 1 indicates the absence of a wall and a 0 the presence of one. Please see Figure 1 for how we specified directions ("north," "south," etc.). The bits are organized as follows:
[West] [East] [South] [North]
Ex. 0011 would mean there is a wall to the west and east of the robot. We chose to implement DFS because we believe it allows the robot to navigate the maze more efficiently; with breadth-first search, the robot would spend too much time navigating back to already-visited coordinates on the grid.
Simulation:
We first approached implementing DFS in simulation. We "hardcoded" our wall information into the DFS algorithm and made sure the right series of grid coordinates was outputted.
Algorithm that facilitates maze exploration
Writing in Arduino
In Lab 3, we created a DFS simulation in Java. However, our robot cannot run Java, so we needed to convert the code to Arduino. At first, we considered using an Arduino library called QList. However, because our grid coordinates are numbers and we want to conserve memory, we decided to use integer arrays as our "stack." To implement this, we saved our grid coordinates in this format:
[y coordinate, x coordinate, y coordinate, x coordinate, 0, 0…….]
Additionally, we saved an integer to serve as our "pointer." In the array above, with two coordinate pairs stored, the pointer would be 4: the index at which we would next "push" or "pop." Every time we added or removed a coordinate, we incremented or decremented the pointer accordingly.
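A sketch of the push and pop operations on this array "stack" (the array size here is arbitrary):
int frontier[40];    // holds y, x pairs: [y0, x0, y1, x1, ...]
int frontierPtr = 0; // index of the next free slot

void pushLocation(int y, int x) {
  frontier[frontierPtr]     = y;
  frontier[frontierPtr + 1] = x;
  frontierPtr += 2;
}

// Pops the most recently pushed (y, x) pair.
void popLocation(int &y, int &x) {
  frontierPtr -= 2;
  y = frontier[frontierPtr];
  x = frontier[frontierPtr + 1];
}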
Implement a “backtrack”
As in our Java simulation, we used a frontier stack to save the coordinates we still need to search and a nodesSearched stack to save the coordinates we have already searched. Additionally, we implemented a path stack to help us "backtrack" when our robot reaches a dead end. This was not needed in simulation; in real life, however, our robot would not otherwise know how to backtrack from (3,3) to (1,1). The path stack saves the path of the robot, and when the movePossible() function returns false for the robot's current location and the next location on the frontier stack, the robot knows how to backtrack. You will be able to see our robot "backtracking" in the videos below.
Testing with hardcoded wall information first
In order to isolate the DFS algorithm and make sure it worked regardless of whether the wall sensor values were accurate, we first hardcoded the wall information from the Java simulation. Once our robot was running as predicted, we added in the real wall sensing values.
Adding in wall sensing
The challenge we faced here was taking into account the direction of the robot. Fortunately, our team agreed early on to save the direction the robot was facing as a global variable. Below is our code for writing (and later sending) wall data:
void writeWallInfo() {
  // WESN bit values: W = +8, E = +4, S = +2, N = +1
  front = 0;
  right = 0;
  left = 0;
  wall_sense(); // reads in wall information from the mux
  if (wallFront > 390) { front = 1; }
  if (wallRight > 380) { right = 1; }
  if (wallLeft > 380)  { left = 1; }
  // If direction = 1 (north): front maps to north, right to east,
  //   left to west, south to 0
  // If direction = 2 (east):  front maps to east
  // If direction = 3 (south): front maps to south
  // If direction = 4 (west):  front maps to west
  walls = 15; // value if there were walls on every side
  if (curr_direction == 1) {
    walls = (front * 1) + (left * 8) + (right * 4);
  }
  else if (curr_direction == 2) { walls = (front * 4) + (left * 1) + (right * 2); }
  else if (curr_direction == 3) { walls = (front * 2) + (left * 4) + (right * 8); }
  else if (curr_direction == 4) { walls = (front * 8) + (left * 2) + (right * 1); }
  wall_map_real[curr_y][curr_x] = walls;
  if (curr_y == 4 && curr_x == 3 && curr_direction == 1) {
    // Usually we assume a "back" sensor reading would have shown no wall,
    // but that is not the case for the starting location.
    wall_map_real[curr_y][curr_x] = (front * 1) + (left * 8) + 4 + 2;
  }
}
Please see the video of our DFS working here.
To detect a 660 Hz tone, we utilized an electret microphone (available here: https://www.adafruit.com/product/1063) that was provided to us. In the code, we took the FFT of the analogRead value of input A0 and, via experimentation, found that bin 2 returned the highest-magnitude peak. Thus, to accurately detect a 660 Hz audio signal, we coded the robot to take 10 FFTs in a row and to start if at least 5 of them peaked at bin 2. A snippet of our code is shown below:
for (byte i = 0; i < FFT_N/2; i++) {
  if (fft_log_out[i] > maximum) {
    maximum = fft_log_out[i];
    index = i;
  }
  if (i == 63) { // at the last bin (FFT_N/2 - 1), check where the maximum was
    Serial.println("start_time" + String(index));
    if (index == 2) { // increment start1
      start1++;
    }
    Serial.println("times " + String(start1));
    if (index != 2) { // increment other
      other++;
    }
    if (start1 > 5) {
      other = 0;
    }
    if (start1 > 10 && other < 5) { // repeated peaks at bin 2 indicate a 660 Hz signal
      Serial.println("working");
      start = 1;
    }
  }
}
Our goal was to have our robot detect "treasures" on the walls emitting at three different frequencies: 7, 12, and 17 kHz. To do this, we installed three phototransistors at the front, right, and left of the robot; we then utilized the mux to query each input and determine whether a treasure was detected and, if so, at which frequency. A diagram of how it works is shown below:
In order to extract frequency-domain information from the phototransistors that detect the treasures, we first had to take the Fourier transform of the time-domain signal; this was accomplished using the Fast Fourier Transform library available here. The FFT routine takes the time-domain signal and returns an array whose elements give the magnitude of the signal in each frequency bin.
For this particular robot, we decided to use the 128-point FFT, since we ran into some memory issues and it uses significantly less memory than the 256-point FFT. Once the array was returned, we checked the bins at the frequencies at which we wanted to detect treasures.
After some trial and error, we determined that the optimal bins for detecting 7 kHz treasures were 22-24; for 12 kHz, 39-41; and for 17 kHz, 55-57. A snippet of our code is below:
for (i = 0; i < FFT_N/2; i++) {
  if (fft_log_out[i] > maximum) {
    if (i > 5) { // skip the lowest bins
      maximum = fft_log_out[i];
      index = i;
    }
  }
  if (i == 63) { // last bin
    if (index == 22 || index == 23 || index == 24) { // 7 kHz
      treasure = B00000001;
    }
    if (index == 39 || index == 40 || index == 41) { // 12 kHz
      treasure = B00000010;
    }
    if (index == 55 || index == 56 || index == 57) { // 17 kHz
      treasure = B00000011;
    }
    maximum = 0; // reset the maximum check at the end of the loop
    index = 0;
  }
}
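These bin choices are consistent with the usual FFT example configuration, assuming the ADC free-runs at roughly 38.5 kHz (16 MHz clock, prescaler 32, 13 ADC clocks per sample) with a 128-sample FFT: bin k then sits near k × 38,500 / 128, about k × 300 Hz. A quick sanity check under those assumptions:
// Approximate bin index for a target frequency, assuming fs ~= 38.5 kHz
// and a 128-sample FFT (both assumptions from the example ADC settings).
const float SAMPLE_RATE = 38500.0;
const int   FFT_SIZE    = 128;

int binForFrequency(float hz) {
  return (int)(hz * FFT_SIZE / SAMPLE_RATE + 0.5); // round to nearest bin
}

// binForFrequency(7000)  -> 23   (we used bins 22-24)
// binForFrequency(12000) -> 40   (we used bins 39-41)
// binForFrequency(17000) -> 57   (we used bins 55-57)
// binForFrequency(660)   -> 2    (matches the microphone's bin 2)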
A DE0-Nano FPGA was used to display the current position of the robot, wall information, treasure locations, and a signal indicating a completed exploration of the maze.
We used Quartus to create and upload programs to the FPGA. Writing Verilog in Quartus, we implemented code that programmed the FPGA to accurately display the incoming information using sequential and combinational logic.
The incoming information to the FPGA was stored in registers initialized at the beginning of the program. Registers x1, x2, y1, y2, and y3 represent the current x, y location, from most significant bit to least significant bit. Registers wallF, wallL, wallR, and wallB were used to decide wall locations at the current position, independent of the robot's orientation. Registers treasure1 and treasure2 were used to determine whether a grid square had no treasure or a 7 kHz, 12 kHz, or 17 kHz treasure.
We used the setup below with resistor values of 10 kΩ and 20 kΩ. These values form voltage dividers that bring the Arduino's 5 V outputs down to roughly 3.3 V (5 V × 20k/30k) at the FPGA pins, avoiding overdriven inputs while keeping noise pickup low.
A two dimensional memory array called “gridscreen” was used in initializing unexplored grid squares. Two dimensional memory arrays “wall1,” “wall2,” “wall3,” and “wall4” were used in initializing wall colors at those locations for the top wall, bottom wall, left wall, and right wall respectively.
always @ (posedge CLOCK_50) begin
  if (rdy == 0) begin
    gridscreen[0][0] = 2'b00;
    gridscreen[0][1] = 2'b00;
    gridscreen[0][2] = 2'b00;
    gridscreen[0][3] = 2'b00;
    gridscreen[0][4] = 2'b00;
    …
    gridscreen[3][4] = 2'b00;
    …
    wall1[0][0] = 8'b000_000_00;
    wall1[0][1] = 8'b000_000_00;
    wall1[0][2] = 8'b000_000_00;
    wall1[0][3] = 8'b000_000_00;
    wall1[0][4] = 8'b000_000_00;
    …
    wall4[3][4] = 8'b000_000_00;
    rdy = 1;
    lastsquare_x = 3'b111;
  end
The next part of the code, below, updates unexplored grid spaces to the explored or currently-located colors. One issue we ran into was a rare error that displayed the wrong grid or wall colors when the robot moved from one square to another. This was most likely caused by a lack of synchronization between the robot's movements and the rapid clock (posedge CLOCK_50) this part of the code ran on.
To handle this issue, a val bit was used to make sure the program only changed values once the Arduino had finished updating all of its output values at a given time. The val bit was sent from the Arduino to the FPGA through another GPIO connection. Even once val was confirmed to be one, the program had to iterate over 1000 times before setting new values; this prevented the rapid shifting changes that contributed to the technical problems described above.
Our program displayed the existence of a treasure by coloring the walls with a color corresponding to the treasure frequency (7 kHz = red, 12 kHz = green, 17 kHz = blue). This method works because it is impossible for a treasure to sit at a grid square without the presence of any walls. This part of the code was implemented by checking for a high value of, for example, wallFront, and updating the wall1 memory array according to the presence of a treasure.
if (val == 1'b1) begin
  iterations = iterations + 1;
  if (iterations > 1000) begin
    iterations = 0;
    if (gridscreen[0][0] == 2'b10 || gridscreen[0][0] == 2'b01) begin
      gridscreen[0][0] = 2'b10;
    end
    if (gridscreen[0][1] == 2'b10 || gridscreen[0][1] == 2'b01) begin
      gridscreen[0][1] = 2'b10;
    end
    …
    if (gridscreen[3][4] == 2'b10 || gridscreen[3][4] == 2'b01) begin
      gridscreen[3][4] = 2'b10;
    end
    if (finito == 1'b1) begin
      gridscreen[grid_coord_x][grid_coord_y] = 2'b11;
    end
    else begin
      gridscreen[grid_coord_x][grid_coord_y] = 2'b01;
    end
    if (wallFront) begin
      if (sevenK == 1'b1) begin
        wall1[grid_coord_x][grid_coord_y] = 8'b111_000_00; // Red
      end
      else if (twelveK == 1'b1) begin
        wall1[grid_coord_x][grid_coord_y] = 8'b000_111_00; // Green
      end
      else if (seventeenK == 1'b1) begin
        wall1[grid_coord_x][grid_coord_y] = 8'b000_000_11; // Blue
      end
      else begin
        wall1[grid_coord_x][grid_coord_y] = 8'b011_011_01; // White
      end
    end
  end
end
To make our program more efficient and adaptable for future uses (e.g., more grid spaces with images), we implemented a double for loop that sequences over the two-dimensional memory arrays. Walls and the image at each square were displayed by coloring squares over each other: for example, a left wall was drawn as a square extending slightly further to the left, and the grid square's color (at normal width), corresponding to whether it was explored or unexplored, was drawn over it.
In addition, we decided to use images in our final FPGA code to make our display more captivating. This was done using a bitmap: a one-dimensional array of hexadecimal values, one color value per pixel. The line of code PIXEL_COLOR = imgdisplay[(PIXEL_COORD_X << 7) + PIXEL_COORD_Y] was vital to displaying the image correctly, as the bitmap is only one column long and does not automatically account for a two-dimensional space. PIXEL_COORD_X is shifted left by 7 bit positions (equivalent to multiplying by 128) because the texture is 128 pixels long, so each block of 128 bitmap values represents one column; for example, pixel (x = 3, y = 10) maps to index 3 × 128 + 10 = 394.
The line $readmemh("test.txt", imgdisplay) was included in the initialization so that the Verilog code could read in the bitmap.
PIXEL_WIDTH = 10'd128;
PIXEL_HEIGHT = 10'd95;
if ((PIXEL_COORD_X < 2 * PIXEL_WIDTH) && (PIXEL_COORD_Y < 2 * PIXEL_HEIGHT)) begin
  for (i = 10'd0; i <= 10'd1; i = i + 10'd1) begin
    for (j = 10'd0; j <= 10'd1; j = j + 10'd1) begin
      // Upper walls
      if (((j * PIXEL_WIDTH + 10'd0 < PIXEL_COORD_X) && (PIXEL_COORD_X < (j + 10'd1) * PIXEL_WIDTH - 10'd5)) && ((i * PIXEL_HEIGHT < PIXEL_COORD_Y) && (PIXEL_COORD_Y < (i + 10'd1) * PIXEL_HEIGHT - 10'd5))) begin
        PIXEL_COLOR = wall1[j][i];
      end
      …
      // Image display
      if (((j * PIXEL_WIDTH + 10'd5 < PIXEL_COORD_X) && (PIXEL_COORD_X < (j + 10'd1) * PIXEL_WIDTH - 10'd5)) && ((i * PIXEL_HEIGHT + 10'd5 < PIXEL_COORD_Y) && (PIXEL_COORD_Y < (i + 10'd1) * PIXEL_HEIGHT - 10'd5))) begin
        PIXEL_COLOR = imgdisplay[(PIXEL_COORD_X << 7) + PIXEL_COORD_Y];
      end
      // Explored, unexplored, or currently located
      if (((j * PIXEL_WIDTH + 10'd52 < PIXEL_COORD_X) && (PIXEL_COORD_X < (j + 10'd1) * PIXEL_WIDTH - 10'd52)) && ((i * PIXEL_HEIGHT + 10'd35 < PIXEL_COORD_Y) && (PIXEL_COORD_Y < (i + 10'd1) * PIXEL_HEIGHT - 10'd35))) begin
        if (gridscreen[j][i] == 2'b01) begin
          PIXEL_COLOR = 8'b001_111_01;
        end
        else if (gridscreen[j][i] == 2'b10) begin
          PIXEL_COLOR = 8'b000_000_00;
        end
        else if (gridscreen[j][i] == 2'b11) begin
          PIXEL_COLOR = 8'b100_100_01;
        end
      end
    end
  end
end
else begin
  PIXEL_COLOR = 8'b000_000_00;
end
The adaptability of this code was helpful when displaying the maze for the final competition. The memory arrays were easily updated to display image files instead of solid colors, and several control statements were easily added inside the body of the double for loop to identify and display, in real time, the grid space the robot was located in, the wall information for that grid space, and the treasure information.
The efficiency of the code mitigated potential screen latency when displaying the robot's location: the simplified iterative scheme involves less data storage and fewer calculations, conserving memory and computational power.
Even though our treasure detection hardware failed before our competition, the following simulation demonstrates how our display would have looked with treasure detection (note: the beige walls may be hard to see). The simulation was implemented via RF transmission of hard-coded values from the DFS() function.
In the competition, information was communicated wirelessly through a transmitting radio on the robot and a receiving radio connected to our FPGA basestation.
A transmitter() function was used in the main code on the robot's transmitting Arduino, ensuring that information could be transmitted at the right time during the DFS() simply by calling transmitter().
Two bytes, "loc" and "loc2," were needed to transmit the current position, wall information, and treasure information. The current position was stored in loc. In loc2, the two bits of treasure information were shifted left by four to sit above the four bits representing wall information (front, left, right, and back walls). The radio.write() function transmitted the data. Code taken from the RF_master example was also used in case a timeout occurred when the Arduino on the other end did not receive the data.
void transmitter(byte loc, byte loc2) {
  // First, stop listening so we can talk.
  radio.stopListening();
  // Take the data and send it. This will block until complete.
  // 5 bits for current position: first 3 are y, next 2 are x;
  // bits are arranged from most significant bit to least significant bit.
  // 4 bits for wall data: west, east, south, north respectively.
  byte value = loc;
  byte value2 = loc2;
  testing = testing + 1;
  printf("Now sending %lu...", value2);
  bool ok = radio.write(&value, sizeof(unsigned long));
  bool ok2 = radio.write(&value2, sizeof(unsigned long));
  if (ok)
    printf("ok...");
  else
    printf("failed.\n\r");
  // Now, continue listening
  radio.startListening();
  // Wait here until we get a response, or timeout (200 ms)
  unsigned long started_waiting_at = millis();
  bool timeout = false;
  while (!radio.available() && !timeout)
    if (millis() - started_waiting_at > 200)
      timeout = true;
  // Describe the results
  if (timeout)
  {
    printf("Failed, response timed out.\n\r");
  }
  else
  {
    // Grab the response, compare, and send to debugging spew
    byte got_time;
    radio.read(&got_time, sizeof(unsigned long));
    // Spew it
    printf("Got response %lu, round-trip delay: %lu\n\r", got_time, millis() - got_time);
  }
  // Try again 1s later
  delay(1000);
}
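Based on the bit layouts described in the comments above, packing the two payload bytes might look like this (the helper names are ours, for illustration):
// loc: 5-bit position, y in the top three bits, x in the bottom two.
byte packLocation(byte y, byte x) {
  return (y << 2) | x;
}

// loc2: two treasure bits shifted above the four wall bits (W E S N).
byte packWallsAndTreasure(byte walls, byte treasure) {
  return (treasure << 4) | walls;
}

// Example: position (y = 4, x = 3) with walls to the west and north and
// a 12 kHz treasure would be sent as:
//   byte loc  = packLocation(4, 3);                // B10011
//   byte loc2 = packWallsAndTreasure(B1001, B10);  // B101001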
The receiving Arduino delivered the transmitted data to the FPGA. Each digital pin was set low before being driven high or low with the new data; this ensured that data only reached the FPGA once all the digital outputs were decided, regardless of delays between received packets. The val bit was also incorporated so that the FPGA latched the right bits at the right time, as described in the FPGA section. The bitRead() function was heavily used to read each individual bit of the two bytes that were received. A section of the receiving Arduino code follows:
if (radio.available())
{
  // Dump the payloads until we've gotten everything
  // (the radio.read() calls that fill got_time and got_time2 are
  // omitted in this excerpt)
  unsigned long got_time;
  unsigned long got_time2;
  bool done = false;
  bool done2 = false;
  int val = 0; // initialized so the write-out below runs deterministically
  digitalWrite(7, LOW);
  digitalWrite(5, LOW);
  digitalWrite(4, LOW);
  digitalWrite(3, LOW);
  digitalWrite(2, LOW);
  digitalWrite(A2, LOW);
  digitalWrite(A1, LOW);
  digitalWrite(A3, LOW);
  digitalWrite(A0, LOW);
  digitalWrite(A4, LOW);
  digitalWrite(A5, LOW);
  if (val == 0) {
    digitalWrite(7, bitRead(got_time, 2));
    digitalWrite(5, bitRead(got_time, 3));
    digitalWrite(4, bitRead(got_time, 4));
    digitalWrite(3, bitRead(got_time, 0));
    digitalWrite(2, bitRead(got_time, 1));
    digitalWrite(A1, bitRead(got_time2, 0)); // North
    digitalWrite(A3, bitRead(got_time2, 2)); // East
    digitalWrite(A0, bitRead(got_time2, 1)); // South
    digitalWrite(A2, bitRead(got_time2, 3)); // West
    digitalWrite(A4, bitRead(got_time2, 5)); // Treasure1
    digitalWrite(A5, bitRead(got_time2, 4)); // Treasure2
    val = 1;
    delay(100);
    digitalWrite(6, HIGH); // raise the val bit: outputs are ready for the FPGA
  }
  delay(100);
  digitalWrite(6, LOW);
  val = 0;
  // Now, resume listening so we catch the next packets.
  radio.startListening();
}
A done signal was transmitted once all the nodes were searched in the DFS() function. The transmitting Arduino sent the following radio packet once it finished mapping the maze:
wallsarecool = B00110000; //Sends impossible scenario: treasures with no walls
transmitter(loc, wallsarecool);
As the comment notes, this byte of information would be impossible during DFS() because the robot cannot detect treasures in grid squares without walls (unless a treasure is extremely close). We therefore used these bit values so that the FPGA could recognize a done signal.
Once the FPGA recognized this signal, it colored the small square at the robot's current position yellow on the display. In addition, it output a sine wave from a 255-element array in the Verilog code. These values were assigned to the following GPIO pins:
These connections went through a digital-to-analog converter, which output a mono signal to a 3.5 mm auxiliary jack connected to the lab speakers.
Here is a video of our FPGA simulation, complete with treasure detection and done signal.
Our complete base station, including the FPGA, the voltage dividers for each connection, the receiving Arduino, and the stereo connection, is shown below:
We had trouble with Milestones 3 and 4, so there was a lot left to accomplish. First, as a sanity check, we installed a PCB (for the mux and amplifier) on the robot to clean up the wiring; a diagram and schematic are shown below. We also reorganized our mux:
Select Bits | Input Number | Sensor Output |
---|---|---|
000 | Y0 | Front Treasure Detector |
001 | Y1 | Left Front Line Sensor |
010 | Y2 | Right Front Line Sensor |
011 | Y3 | Right Wall Sensor |
100 | Y4 | Left Wall Sensor |
101 | Y5 | Right Treasure Detector |
110 | Y6 | Left Treasure Detector |
111 | Y7 | Microphone |
We attempted to implement treasure detection; we provide the following videos to show that our treasure detection works individually, outside of the robot:
We also added microphone detection. A video of it running on the robot can be seen here.
Merging the code proved to be challenging, and as a result neither microphone nor treasure detection was available on competition day. The FFT library interfered with the servo motors, likely because the two libraries compete for the same hardware timers and interrupts, and whenever both were used the robot would refuse to move.
Our treasure detection and microphone were not working on the day of the competition, although both aspects of our robot worked individually.
It should also be noted that we attempted to merge the treasure, microphone, and DFS code the night before the competition, which led to a complete mechanical meltdown. The robot refused to move, and several of the wall and line sensors were not sensing properly. We spent most of the night before the competition debugging the robot, and after we managed to get DFS and radio transmission working again, decided that it was too risky to try to implement the treasure and microphone detectors. We did consider re-merging the treasure detectors, but later on discovered that our 9V battery had died, and a trip to 7-11 determined that an unnamed individual had already bought all the 9V batteries the store had in stock. Without the battery to power the treasure detectors, we were ultimately unable to implement them in time for the competition.
Come competition day, we placed 8th out of 18 teams, which is slightly above average. Here's a video of it running. Although we didn't manage to get into the finals, the fact that we were able to have a running robot that could transmit radio data to the FPGA and complete a maze was in itself impressive to us, considering the fact that 5 hours previously, the robot wasn't even running properly. Perhaps we're not quite "The Little Arduino That Could", but at the very least, we can be "The Little Arduino That Tried".
We honestly could not be more proud of each other and the amount of work, time, and effort that was put into this robot.
A giant thank you to everyone who was part of this team, and to the ECE 3400 instructor team!
A word to future teams: think about the final design of your robot early on, and build around it. Don't try to power the servos with your Arduino, remember that there are only 6 analog pins on your Arduino, merge all of your code well in advance of the competition so as to get a head start on debugging, and don't procrastinate.
Furthermore, remember that this is a team effort, so treat it as such. We would not have made it this far if we didn't work on this as a team.