@ajb2k3 Thanks for your support; we will share more interesting projects in the future. If you want a mechArm, please contact us!
Enjoy robots world
Posts made by ElephantRobotics
RE: Facial Recognition and Tracking Project with mechArm M5stack
@pengyuyan Of course! You need to make some modifications to the code!
RE: Facial Recognition and Tracking Project with mechArm M5stack
This post is reproduced from a user project.
Facial Recognition and Tracking Project with mechArm M5stack
Long time no see, I'm back.
I'll give a report on the recent progress of the facial recognition and tracking project. For those who are new, let me briefly introduce what I am working on. I am using a desktop six-axis robotic arm with a camera mounted on the end for facial recognition and tracking. The project consists of two modules: one for facial recognition, and the other for controlling the movement of the robotic arm. I've previously discussed how the basic movement of the robotic arm is controlled and how facial recognition is implemented, so I won't go into those details again. This report will focus on how the movement control module was completed.
mechArm 270 M5Stack, camera
Details of the equipment can be found in the previous article.
Motion control module
Next, I'll introduce the movement control module.
In the control module, the usual input for movement control is an absolute position in Cartesian space. Obtaining an absolute position requires a camera-arm calibration algorithm involving several unknown parameters. We skipped this step and instead chose relative displacement for movement control. This required designing a sampling movement mechanism to ensure that the face's offset is fully captured within one control cycle so that tracking can be achieved.
Therefore, to present the whole function quickly, I chose not to use a hand-eye calibration algorithm to handle the relationship between the camera and the arm, since the workload of hand-eye calibration is quite large.
The code below shows how to obtain parameters from the information obtained by the facial recognition algorithm.
_, img = cap.read()
# Convert to grey scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Detect faces
faces = face_cascade.detectMultiScale(gray, 1.1, 4)
# Draw the outline
for (x, y, w, h) in faces:
    if w > 200 or w < 80:
        # Limit the recognition width to between 80 and 200 pixels
        continue
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 3)
    center_x = (x+w-x)//2 + x
    center_y = (y+h-y)//2 + y
    size_face = w
The obtained variables, center_x, center_y, and size_face, are used to calculate the position. Below is the code for the algorithm that processes the data to control the movement.
run_num = 20  # Control cycle of 20 frames
if save_state == False:
    # Save a start point (save_x, save_y)
    save_x = center_x
    save_y = center_y
    save_z = size_face
    origin_angles = mc.get_angles()
    print("origin point = ", save_x, save_y, origin_angles)
    time.sleep(2)
    current_coords = mc.get_coords()
    save_state = True
else:
    if run_count > run_num:  # Limit the control period to 20 frames
        run_count = 0
        # Record relative offsets
        error_x = center_x - save_x
        error_y = center_y - save_y
        error_z = size_face - save_z
        # Pixel differences are converted into actual offsets,
        # which can be scaled and oriented
        trace_1 = -error_x * 0.15
        trace_z = -error_y * 0.5
        trace_x = -error_z * 2.0
        # x/z axis offset; note that this is open-loop control
        current_coords[2] += trace_z
        current_coords[0] += trace_x
        # Restrict the Cartesian-space x/z range
        if current_coords[0] < 70:
            current_coords[0] = 70
        if current_coords[0] > 150:
            current_coords[0] = 150
        if current_coords[2] < 220:
            current_coords[2] = 220
        if current_coords[2] > 280:
            current_coords[2] = 280
        # Inverse kinematic solution
        x = current_coords[0]
        z = current_coords[2]
        # print(x, z)
        L1 = 100
        L3 = 96.5194
        x = x - 56.5
        z = z - 114
        cos_af = (L1*L1 + L3*L3 - (x*x + z*z)) / (2*L1*L3)
        cos_beta = (L1*L1 - L3*L3 + (x*x + z*z)) / (2*L1*math.sqrt(x*x + z*z))
        reset = False
        # The solution only applies to some poses, so there may be no solution
        if abs(cos_af) > 1:
            reset = True
        if reset == True:
            current_coords[2] -= trace_z
            current_coords[0] -= trace_x
            print("err = ", cos_af)
            continue
        af = math.acos(cos_af)
        beta = math.acos(cos_beta)
        theta2 = -(beta + math.atan(z/x) - math.pi/2)
        theta3 = math.pi/2 - (af - math.atan(10/96))
        theta5 = -theta3 - theta2
        cof = 57.295  # Radians to degrees
        move_juge = False
        # Limit the distance travelled; trace_1 is in degrees, trace_x/z in mm
        if abs(trace_1) > 1 and abs(trace_1) < 15:
            move_juge = True
        if abs(trace_z) > 10 and abs(trace_z) < 50:
            move_juge = True
        if abs(trace_x) > 25 and abs(trace_x) < 80:
            move_juge = True
        if move_juge == True:
            print("trace = ", trace_1, trace_z, trace_x)
            origin_angles[0] += trace_1
            origin_angles[1] = theta2 * cof
            origin_angles[2] = theta3 * cof
            origin_angles[4] = theta5 * cof
            mc.send_angles(origin_angles, 70)
        else:
            # Due to the open-loop control, if no displacement occurs
            # the current coordinate values need to be restored
            current_coords[2] -= trace_z
            current_coords[0] -= trace_x
    else:
        # 10 frames set aside for updating the camera coordinates
        # at the end of the motion
        if run_count < 10:
            save_x = center_x
            save_y = center_y
            save_z = size_face
        run_count += 1
In the algorithm module, how do we move the arm once the relative displacement is obtained? To ensure a good movement effect, we did not directly use the coordinate movement interface provided by mechArm, but instead added an inverse kinematics step in Python. For the specific posture, we calculated the inverse solution of the robotic arm and transformed the coordinate movement into angle movement, avoiding singular points and other factors that affect Cartesian-space movement. Combined with the code of the facial recognition part, the entire project is complete.
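The two-link geometry behind this inverse solution can be isolated into a small helper. Below is a minimal sketch, not the project's exact code: the link lengths, the 56.5 mm and 114 mm frame offsets, and the atan(10/96) wrist correction are taken from the snippet above, while the function name and the use of `atan2` (for quadrant safety) are my own choices:

```python
import math

def planar_ik(x, z, L1=100.0, L3=96.5194):
    """2-link planar inverse kinematics in the x-z plane.

    Returns joint angles (theta2, theta3, theta5) in degrees, or None
    when the target is out of reach. Link lengths and frame offsets
    follow the values used in the project code above.
    """
    # Shift the target from the base frame into the shoulder frame
    x = x - 56.5
    z = z - 114.0
    r2 = x * x + z * z
    # Law of cosines: elbow angle between the two links
    cos_af = (L1 * L1 + L3 * L3 - r2) / (2 * L1 * L3)
    if abs(cos_af) > 1:
        return None  # target outside the reachable annulus
    cos_beta = (L1 * L1 - L3 * L3 + r2) / (2 * L1 * math.sqrt(r2))
    af = math.acos(cos_af)
    beta = math.acos(cos_beta)
    theta2 = -(beta + math.atan2(z, x) - math.pi / 2)
    theta3 = math.pi / 2 - (af - math.atan(10 / 96))
    theta5 = -theta3 - theta2  # keep the camera level
    deg = 180 / math.pi
    return theta2 * deg, theta3 * deg, theta5 * deg
```

Because theta5 is defined as -(theta2 + theta3), the end-mounted camera keeps a constant pitch no matter where the target lands in the clamped x/z window, which is exactly what face tracking needs.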
Let's look at the results together.
Normally, facial recognition has high computational requirements. Its algorithm repeatedly evaluates adjacent pixels to increase recognition accuracy. We use the mechArm 270-Pi, which uses a Raspberry Pi 4B as the processor for facial recognition, giving roughly 400 MHz of usable compute for this workload. Because this is insufficient, we simplified the process and changed the recognition mechanism to just a few passes of coarse recognition. In our application, the background therefore needs to be simple.
The facial recognition and robotic arm tracking project is completed.
Key information about the project:
● In the case of low computing power, a simple usage scenario was set up to achieve smooth results.
● Replaced the complex hand-eye calibration algorithm with relative-position movement, using a sampling movement mechanism to ensure the face's offset is fully captured within one control cycle so that tracking works.
● Added an inverse kinematics step in Python: for specific postures, the inverse solution of the robotic arm is calculated and coordinate movement is converted into angle movement, avoiding singular points and other factors that affect Cartesian-space movement.
Some shortcomings of the project:
● There are certain requirements for the usage scenario, and a clean background is needed to run successfully (by fixing the scene, many parameters were simplified)
● As mentioned earlier, the computing power of the Raspberry Pi is insufficient; other control boards, such as the Jetson Nano (600 MHz) or a high-performance image-processing computer, would run more smoothly.
● Also, in the movement control module, because we did not do hand-eye calibration, only relative displacement can be used. The control is divided into a "sampling stage" and a "movement stage". The lens should ideally be stationary during sampling, but this is difficult to guarantee; if the lens moves during sampling, the coordinates deviate.
Finally, I would like to specially thank Elephant Robotics for their help during the development of the project, which made it possible to complete it. The MechArm used in this project is a centrally symmetrical structured robotic arm with limitations in its joint movement. If the program is applied to a more flexible myCobot, the situation may be different.
If you have any questions about the project, please leave me a message below.
RE: Exploring the Advantages and Differences of Different Types of Robotic Arms in AI Kit
Thank you. If you had the choice, which robotic arm would you choose?
Exploring the Advantages and Differences of Different Types of Robotic Arms in AI Kit
This article primarily introduces the 3 robotic arms that are compatible with the AI Kit. What are the differences between them?
If you have a robotic arm, what would you use it for? Simple control of the robotic arm to move it around? Repeat a certain trajectory? Or allow it to work in the industry to replace humans? With the advancement of technology, robots are frequently appearing around us, replacing us in dangerous jobs and serving humanity. Let's take a look at how robotic arms work in an industrial setting.
What is AI Kit?
The AI Kit is an entry-level artificial intelligence Kit that integrates vision, positioning, grasping, and automatic sorting modules. Based on the Linux system and built-in ROS with a 1:1 simulation model, the AI Kit supports the control of the robotic arm through the development of software, allowing for a quick introduction to the basics of artificial intelligence.
Currently, AI Kit can achieve color and image recognition, automatic positioning, and sorting. This Kit is very helpful for users who are new to robotic arms and machine vision, as it allows you to quickly understand how artificial intelligence projects are built and learn more about how machine vision works with robotic arms.
Next, let's briefly introduce the 3 robotic arms that are compatible with the AI Kit.
The AI Kit can be adapted for use with the myPalletizer 260 M5Stack, myCobot 280 M5Stack, and mechArm 270 M5Stack. All three robotic arms are equipped with the M5Stack-Basic and the ESP32-ATOM.
myPalletizer 260 is a lightweight 4-axis robotic arm that is compact and easy to carry. The myPalletizer weighs 960g, has a 250g payload, and has a working radius of 260mm. It is explicitly designed for makers and educators and has rich expansion interfaces.
mechArm 270 is a small 6-axis robotic arm with a center-symmetrical structure (like an industrial structure). The mechArm 270 weighs 1kg with a payload of 250g, and has a working radius of 270mm. As the most compact collaborative robot, mechArm is small but powerful.
myCobot 280 is the smallest and lightest 6-axis collaborative robotic arm (UR structure) in the world, and can be customized according to user needs. The myCobot has a self-weight of 850g, an effective payload of 250g, and an effective working radius of 280mm. It is small but powerful, can be used with various end effectors to adapt to different application scenarios, and supports software development on multiple platforms to meet the needs of scenarios such as scientific research and education, smart home, and commercial pre-R&D.
Let's watch a video to see how AI Kit works with these 3 robotic arms.
The video shows the color recognition and intelligent sorting function, as well as the image recognition and intelligent sorting function. Let's briefly introduce how AI Kit is implemented (using the example of the color recognition and intelligent sorting function).
This artificial intelligence project mainly uses two modules:
● Vision processing module
● Computation module (handles the eye-to-hand conversion)
Vision processing module
OpenCV (Open Source Computer Vision) is an open-source computer vision library used to develop computer vision applications. OpenCV includes a large number of functions and algorithms for image processing, video analysis, deep learning based object detection and recognition, and more.
We use OpenCV to process images. The video from the camera is processed to obtain information from the video such as color, image, and the plane coordinates (x, y) in the video. The obtained information is then passed to the processor for further processing.
Here is part of the code to process the image (colour recognition)
# detect cube color
def color_detect(self, img):
    # set the arrangement of the HSV colors
    x = y = 0
    gs_img = cv2.GaussianBlur(img, (3, 3), 0)  # Gaussian blur
    # transform the img to the HSV model
    hsv = cv2.cvtColor(gs_img, cv2.COLOR_BGR2HSV)
    for mycolor, item in self.HSV.items():
        redLower = np.array(item[0])
        redUpper = np.array(item[1])
        # wipe off all colors except the color in range
        mask = cv2.inRange(hsv, redLower, redUpper)
        # an erosion operation on the picture to remove edge roughness
        erosion = cv2.erode(mask, np.ones((1, 1), np.uint8), iterations=2)
        # a dilation operation, whose role is to deepen the color depth in the picture
        dilation = cv2.dilate(erosion, np.ones((1, 1), np.uint8), iterations=2)
        # adds pixels to the image
        target = cv2.bitwise_and(img, img, mask=dilation)
        # the filtered image is transformed into a binary image and placed in binary
        ret, binary = cv2.threshold(dilation, 127, 255, cv2.THRESH_BINARY)
        # get the contour coordinates of the image, where contours is the
        # coordinate value; here only the contour is detected
        contours, hierarchy = cv2.findContours(
            dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if len(contours) > 0:
            # do something about misidentification
            boxes = [
                box
                for box in [cv2.boundingRect(c) for c in contours]
                if min(img.shape[0], img.shape[1]) / 10
                < min(box[2], box[3])
                < min(img.shape[0], img.shape[1]) / 1
            ]
            if boxes:
                for box in boxes:
                    x, y, w, h = box
                # find the largest object that fits the requirements
                c = max(contours, key=cv2.contourArea)
                # get the lower left and upper right points of the positioning object
                x, y, w, h = cv2.boundingRect(c)
                # locate the target by drawing a rectangle
                cv2.rectangle(img, (x, y), (x+w, y+h), (153, 153, 0), 2)
                # calculate the rectangle center
                x, y = (x*2+w)/2, (y*2+h)/2
                # record which color was recognized
                if mycolor == "red":
                    self.color = 0
                elif mycolor == "green":
                    self.color = 1
                elif mycolor == "cyan" or mycolor == "blue":
                    self.color = 2
                else:
                    self.color = 3
    if abs(x) + abs(y) > 0:
        return x, y
    else:
        return None
Just obtaining image information is not enough; we must process the obtained data and pass it to the robotic arm to execute commands. This is where the computation module comes in.
NumPy (Numerical Python) is an open-source Python library mainly used for mathematical calculations. NumPy provides many functions and algorithms for scientific computing, including matrix operations, linear algebra, random number generation, Fourier transforms, and more. We need to process the coordinates on the image and convert them to real coordinates, a process known as the eye-to-hand conversion. We use Python and the NumPy library to calculate the coordinates and send them to the robotic arm to perform sorting.
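Stripped of the calibration bookkeeping, the eye-to-hand conversion boils down to a scale-and-shift from pixels to millimetres. Here is a minimal sketch of that idea; the function name `pixel_to_world` and the calibration values are hypothetical placeholders (in the actual project they come from `detect.set_params` and `detect.get_position`):

```python
import numpy as np

# Hypothetical calibration values, for illustration only: the pixel centre
# of the work area and a mm-per-pixel ratio, as the calibration loop in the
# project would produce from markers of known spacing.
c_x, c_y = 320.0, 240.0   # image centre of the cropped work area (pixels)
ratio = 0.5               # millimetres per pixel

def pixel_to_world(px, py):
    """Convert an image coordinate to a coordinate in the arm's frame.

    Assumes the camera looks straight down and the image axes are aligned
    with the arm's x/y axes, so the mapping is a pure scale-and-shift
    (no rotation term).
    """
    offset = np.array([px - c_x, py - c_y])  # pixels from the centre
    return offset * ratio                    # millimetres in the arm frame
```

With a real setup the centre and ratio are averaged over many frames, exactly as the calibration loop below does, before any cube coordinates are trusted.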
Here is part of the code for the computation.
while cv2.waitKey(1) < 0:
    # read camera
    _, frame = cap.read()
    # deal img
    frame = detect.transform_frame(frame)
    if _init_ > 0:
        _init_ -= 1
        continue
    # calculate the parameters of camera clipping
    if init_num < 20:
        if detect.get_calculate_params(frame) is None:
            cv2.imshow("figure", frame)
            continue
        else:
            x1, x2, y1, y2 = detect.get_calculate_params(frame)
            detect.draw_marker(frame, x1, y1)
            detect.draw_marker(frame, x2, y2)
            detect.sum_x1 += x1
            detect.sum_x2 += x2
            detect.sum_y1 += y1
            detect.sum_y2 += y2
            init_num += 1
            continue
    elif init_num == 20:
        detect.set_cut_params(
            (detect.sum_x1)/20.0,
            (detect.sum_y1)/20.0,
            (detect.sum_x2)/20.0,
            (detect.sum_y2)/20.0,
        )
        detect.sum_x1 = detect.sum_x2 = detect.sum_y1 = detect.sum_y2 = 0
        init_num += 1
        continue
    # calculate params of the coords between cube and mycobot
    if nparams < 10:
        if detect.get_calculate_params(frame) is None:
            cv2.imshow("figure", frame)
            continue
        else:
            x1, x2, y1, y2 = detect.get_calculate_params(frame)
            detect.draw_marker(frame, x1, y1)
            detect.draw_marker(frame, x2, y2)
            detect.sum_x1 += x1
            detect.sum_x2 += x2
            detect.sum_y1 += y1
            detect.sum_y2 += y2
            nparams += 1
            continue
    elif nparams == 10:
        nparams += 1
        # calculate and set params for the real coord between cube and mycobot
        detect.set_params(
            (detect.sum_x1+detect.sum_x2)/20.0,
            (detect.sum_y1+detect.sum_y2)/20.0,
            abs(detect.sum_x1-detect.sum_x2)/10.0 +
            abs(detect.sum_y1-detect.sum_y2)/10.0
        )
        print("ok")
        continue
    # get detect result
    detect_result = detect.color_detect(frame)
    if detect_result is None:
        cv2.imshow("figure", frame)
        continue
    else:
        x, y = detect_result
        # calculate real coord between cube and mycobot
        real_x, real_y = detect.get_position(x, y)
        if num == 20:
            detect.pub_marker(real_sx/20.0/1000.0, real_sy/20.0/1000.0)
            detect.decide_move(real_sx/20.0, real_sy/20.0, detect.color)
            num = real_sx = real_sy = 0
        else:
            num += 1
            real_sy += real_y
            real_sx += real_x
The AI Kit project is open source and can be found on GitHub.
After comparing the video, content, and code of the program, it appears that the 3 robotic arms have the same framework and only need minor modifications to the data to run successfully.
There are roughly two main differences between these 3 robotic arms.
One is comparing the 4- and 6-axis robotic arms in terms of their practical differences in use (comparing myPalletizer to mechArm/myCobot).
Let's look at a comparison between a 4-axis robotic arm and a 6-axis robotic arm.
From the video, we can see that both the 4-axis and 6-axis robotic arms have a sufficient range of motion in the AI Kit's work area. The main difference between them is that myPalletizer has a simple and quick start process with only 4 joints in motion, allowing it to efficiently and steadily perform tasks, while myCobot requires 6 joints, two more than myPalletizer, resulting in more calculations in the program and a longer start time (in small scenarios).
In summary, when the scene is fixed, we can consider the working range of the robotic arm as the first priority when choosing a robotic arm. Among the robotic arms that meet the working range, efficiency and stability will be necessary conditions. If there is an industrial scene similar to our AI Kit, a 4-axis robotic arm will be the first choice. Of course, a 6-axis robotic arm can operate in a larger space and can perform more complex movements. They can rotate in space, while a 4-axis robotic arm cannot do this. Therefore, 6-axis robotic arms are generally more suitable for industrial applications that require precise operation and complex movement.
The second comparison is between the two 6-axis robotic arms, whose main difference is structure: mechArm is a centrally symmetrical structured robotic arm, while myCobot is a UR-structure collaborative robotic arm. We can compare the differences between these two structures in actual application scenarios.
Here are the specifications of the two robotic arms.
The difference in structure between these two leads to a difference in their range of motion. Taking mechArm as an example, the centrally symmetrical structure of the robotic arm is composed of 3 pairs of opposing joints, with the movement direction of each pair of joints being opposite. This type of robotic arm has good balance and can offset the torque between joints, keeping the arm stable.
Shown in the video, mechArm is also relatively stable in operation.
You may now ask: is myCobot not useful then? Of course not. The UR-structure robotic arm is more flexible, can achieve a larger range of motion, and is suited to larger application scenarios. More importantly, myCobot is a collaborative robotic arm: it has good human-robot interaction ability and can work alongside humans. 6-axis collaborative robotic arms are usually used in logistics and assembly work on production lines, as well as in medical, research, and education fields.
As stated at the beginning, the difference between these 3 robotic arms included in the AI Kit is essentially how to choose a suitable robotic arm to use. If you are choosing a robotic arm for a specific application, you will need to take into consideration factors such as the working radius of the arm, the environment in which it will be used, and the load capacity of the arm.
If you are looking to learn about robotic arm technology, you can choose a mainstream robotic arm currently available on the market to learn from. MyPalletizer is designed based on a palletizing robotic arm, mainly used for palletizing and handling goods on pallets. mechArm is designed based on a mainstream industrial robotic arm, which has a special structure that keeps the arm stable during operation. myCobot is designed based on a collaborative robotic arm, which is a popular arm structure in recent years, capable of working with humans and providing human strength and precision.
That's all for this post, if you like this post, please leave us a comment and a like!
We have published an article detailing the differences between mechArm and myCobot. Please click on the link if you are interested in learning more.
RE: myCobot 280-Ard conveyor control in an industrial simulation
Not quite correct, it is the computer that is needed to transfer the data and send the commands to mycobot.
myCobot 280-Ard conveyor control in an industrial simulation
This article was created by "Cang Hai Xiao" and is reproduced with the author's permission.
This article starts with a small example of experiencing a conveyor belt in a complete industrial scene.
A small simulated industrial application was made with the myCobot 280-Arduino Python API and load cells. I used a toy conveyor belt to transfer parts to the weighing pan, and used an M5Stack Basic as the host to build an electronic scale that weighs the parts coming off the conveyor belt and then transmits the weight readings to the PC through UART.
Using the PC as the host computer, a simple GUI was written in python to display the weights in real time and allow the user to modify the weighing values.
Here is a detailed video of how this project works.
The following is the detailed process of the project.
Burn the Mega 2560 and ATOM firmware. (Check the Gitbook for details.)
Write a weighing program and upload the program to M5Stack Basic.
Initialize the serial port and set the connection mode, establishing communication between the PC and the M5Stack Basic.
Calculate the ratio factor. The data read from the sensor via the M5Stack Basic are raw values and need to be calibrated with a 100g weight and a 200g weight to compute the conversion factor into grams. In this case we calculated a ratio factor of -7.81.
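The two-point calibration described above can be sketched in a few lines. This is illustrative only: the function name and the raw readings are hypothetical (chosen so the factor matches the -7.81 from this build), and it assumes the scale has been zeroed (tared) first:

```python
def ratio_factor(raw_100, raw_200):
    """Two-point calibration of a load-cell reading.

    raw_100 / raw_200 are the raw sensor readings with a 100 g and a
    200 g weight on the pan; the returned factor converts raw counts
    to grams.
    """
    return (raw_200 - raw_100) / (200.0 - 100.0)

# Hypothetical raw readings, chosen so the factor matches -7.81
factor = ratio_factor(-781.0, -1562.0)

# A raw reading converted to grams (assuming a tared scale)
grams = -1171.5 / factor
```

Using two weights rather than one cancels any fixed offset in the sensor, which is why the factor is computed from the difference of the two readings.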
Calculate the readings from the load cell and the conversion factor, and display as the weighing value.
Use UART1 to send the data every 20 ms. It is recommended to apply an averaging or median filter to reduce the shock as parts drop from the hopper into the pan.
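A median filter of this kind can be sketched as follows; the window size of 5 samples is an assumption, not a value from this build:

```python
from collections import deque
import statistics

class MedianFilter:
    """Fixed-window median filter to damp the shock spike that appears
    when a part drops into the weighing pan."""

    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def update(self, sample):
        self.window.append(sample)
        return statistics.median(self.window)

f = MedianFilter(5)
readings = [5.0, 5.1, 42.0, 5.2, 5.1]   # 42.0 is an impact spike
smoothed = [f.update(r) for r in readings]
```

Unlike a moving average, the median simply discards a single outlier sample, so a brief impact spike never reaches the threshold comparison on the PC side.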
This is the event corresponding to the zero (tare) button, with 100 ms used for button debouncing.
This is a simple electronic scale program written in UIFlow. The data can be sent to a PC over UART1 via a TTL-USB adapter, and the program is written to the M5Stack Basic with a single click on Download. I used the offline version of UIFlow for ease of connection and debugging.
Use myBlockly to debug the parameters for the press (drop arm) and release (lift arm) actions.
Write the PC program and install pymycobot.
(1) First, write the GUI with the Tkinter library. We can set the threshold for the weighing control; for example, in this test I set 5 g.
(2) Import pymycobot.
(3) The OK button callback first makes myCobot drop its arm to power on the conveyor; the conveyor starts working, and the electronic scale monitors the weight in real time. The loading() function reads the serial weighing data, determines whether the threshold has been reached, and makes myCobot lift its arm once it has.
#============
# Function:
# 1. Setting of the weighing value, displayed in the GUI.
# 2. Use the progress bar to show the progress of the weighing.
# 3. When 99% of the target value is reached, a command is given to
#    myCobot to perform a stop operation.
# date: 2022-11-10
# version: 0.2
# Joint adjustment: combined with the myCobot press and release actions
#============
from tkinter import *
import tkinter.ttk
import serial
import time
from pymycobot.mycobot import MyCobot
from pymycobot.genre import Coord

#==== Global variable initialisation
global val          # Measured weight
val = 0.0
global iset         # Scale factor, based on the set value, setvalue/100
iset = 5/5
global c_set        # Input box value forming the weighing judgement criterion
c_set = 0.0
global action_flag
action_flag = False
# Set the progress bar maximum
maxbyte = 100

#====== myCobot initialisation
mc = MyCobot('COM23', 115200)
mc.power_off()
time.sleep(2)
mc.power_on()
time.sleep(2)
print('is power on?')
print(mc.is_power_on())
time.sleep(2)
mc.send_angles([95.97, -46.4, -133.3, 94.3, -0.9, 15.64], 50)  # Arm lift
time.sleep(2)

#====== Serial port initialisation
try:
    arduino = serial.Serial("COM25", 115200, timeout=1)
except:
    print("Port connection failed")
ReadyToStart = True

# Show progress bar function
def show():
    mc.send_angles([95.6, -67.2, -130.3, 101.9, -2.2, 23.11], 50)  # Arm down
    # Set the current value of the progress bar
    progressbarOne['value'] = 0
    # Set the maximum value of the progress bar
    progressbarOne['maximum'] = maxbyte
    # Call the loading method
    loading()

# Process function
def loading():
    global byte
    global val
    global action_flag
    c_set = setvalue.get()
    iset = 100/float(c_set)  # Calculation of the scaling factor
    byte = arduino.readline().decode('utf-8')
    try:
        if len(byte) != 0:
            val = byte
        else:
            pass
    except:
        pass
    if (1-(float(c_set)-float(val))/float(c_set)) >= 0.99 and action_flag == False:
        # Control myCobot movement when the remaining value is less than 5%
        print("triger")
        mc.send_angles([95.97, -46.4, -133.3, 94.3, -0.9, 15.64], 50)  # Arm up
        action_flag = True  # Make sure we only act once, unless RESET
    # Set the position of the progress bar pointer
    progressbarOne['value'] = (1-(float(c_set)-float(val))/float(c_set))*100
    # Display the measured weighing data in Label4
    strvar.set(str(float(val)))
    # Call the loading method again after 20 ms
    progressbarOne.after(20, loading)

# Reset button callback function
def reset_click():
    global action_flag
    action_flag = False  # Reset the flag to prepare for the next action

# OK button callback function
def ok_click():
    show()

#=========== UI design
# Main window
win = tkinter.Tk()
win.title("mycobot")
# Create a frame form object
frame = tkinter.Frame(win, borderwidth=2, width=450, height=250)
# Fill the form horizontally and vertically
frame.pack()
# Create label 1
Label1 = tkinter.Label(frame, text="Set value (g)")
# Using place, set the position of the label from the upper left corner of
# the form (x, y) and its size (width, height); (x, y) refers to the
# position of the label's upper left corner (anchored NW by default)
Label1.place(x=35, y=15, width=80, height=30)
# Set the data input box
setvalue = tkinter.Entry(frame, text="position2", fg='blue', font=("微软雅黑", 16))
setvalue.place(x=166, y=15, width=60, height=30)
# Set label 3
Label3 = tkinter.Label(frame, text="Real Value (g)")
Label3.place(x=35, y=80, width=80, height=30)
# Set label 4 to hold the measured weight value; the default is 0.0 g
strvar = StringVar()
Label4 = tkinter.Label(frame, textvariable=strvar, text="0.0",
                       fg='green', font=("微软雅黑", 16))
Label4.place(x=166, y=80, height=30, width=60)
progressbarOne = tkinter.ttk.Progressbar(win, length=300, mode='determinate')
progressbarOne.place(x=66, y=156)
# Reset button
resetbutton = tkinter.Button(win, text="Reset", width=15, height=2,
                             command=reset_click).pack(side='left', padx=80, pady=30)
# OK button
okbutton = tkinter.Button(win, text="OK", width=15, height=2,
                          command=show).pack(side='left', padx=20, pady=30)
# Start the event loop
win.mainloop()
The program is debugged step by step:
(1) Debug the electronic scale to ensure the weighing is correct, using weights for calibration. Make sure the data are correct.
(2) Connect myCobot to the conveyor belt, and install a simple button at the end of myCobot that triggers the conveyor belt's power supply when the arm is lowered.
(3) Joint debugging. Set the threshold in the GUI and trigger myCobot to drop its arm so the conveyor belt starts running (parts are transported, fall into the hopper, and are weighed in real time); then trigger myCobot to lift its arm once the threshold (5 g) is reached.
This is a simulated industrial application to demonstrate the control function of the myCobot 280 Arduino. We transmit the weighing data to the PC through the sensor plus the M5Stack Basic, indirectly feeding back the running status of the conveyor belt. The PC receives the weighing data to monitor the transportation of parts on the conveyor belt, and when the threshold is reached, myCobot triggers the arm-lifting action.
The program is elementary; the host-computer side is only about 150 lines. The difficulty is minimal, making it suitable for beginners to get started with understanding and adjusting the robotic arm's electrical and mechanical parameters.
RE: A four-axis robotic arm ideal for industrial education |myPalletizer M5Stack-esp32
I'm very sorry about that. This forum does not allow GIFs.
Watch it on Hackster if you're interested!
A four-axis robotic arm ideal for industrial education |myPalletizer M5Stack-esp32
What is the 4-axis robotic arm?
In the era of Industry 4.0, where information technology is being used to promote industrial change, robotic arms are essential in industry transformation. Automated robotic arms can reduce staff labor and increase productivity using automation technology combined with artificial intelligence, voice, and vision recognition. Robotic arms are now very relevant to our lives. Most robotic arms are built like human hands to perform more tasks such as grasping, pressing, and placing. The axes of a robotic arm represent degrees of freedom and independent movement, and most robotic arms have between two and seven axes. Here I will show you a four-axis palletizing robotic arm that is suitable for introductory learning.
What is the palletizing robotic arm?
Palletizing means neatly stacking items. Palletizing robotic arms grip, transfer, and stack items according to a fixed process.
Which kind of robotic arm is more suitable? A 4-axis robotic arm? Or a 6-axis robotic arm?
Let's look at the table.
The 4-axis palletizing robotic arm can only move up and down, backwards and forwards, and left and right, with the end fixed pointing downwards. This is a significant limitation in terms of application, so it is mainly used in high-speed pick-and-place scenarios. Six-axis robotic arms suit a wider range of designs and can move without dead space to reach any position within their field. Here we will mainly look at the four-axis palletizing robotic arm.
A video was made about the movement of two types of robotic arms.
myPalletizer 260 M5Stack
The myPalletizer robotic arm shown in the video, with M5Stack-ESP32 as the central control, is a fully wrapped lightweight 4-axis palletizing robotic arm with an overall finless design, small and compact, and easy to carry. The weight of myPalletizer is 960g, the payload is 250g, and the working radius is 260mm. I think it is designed for individual makers and educational use. With the multiple extension interfaces, we can learn machine vision with the AI Kit.
Why would we recommend this arm as an introductory 4-axis palletizing robotic arm?
There are many four-axis (4-DOF) robotic arms in industry, with palletizing robotic arms as the mainstream representative. Compared to 6-axis robotic arms, myPalletizer has a more straightforward structure, fewer joints, less stretching, and faster response and operating efficiency, which makes it easier to use for this kind of task and an excellent choice among palletizing robotic arms. Let's take a look at the myPalletizer 260 M5Stack parameters.
The suitability of a robotic arm for learning requires several conditions.
● The robotic arm must support multiple functions.
● If the robotic arm has a mainstream structure, there will be many models of industrial robotic arms to provide reference value.
● Supporting documentation for the robotic arm is available and provides the user with basic operating instructions.
What can we learn with myPalletizer 260?
When programming the robotic arm, we will learn about forward and inverse kinematics, DH model kinematics, Cartesian coordinate systems, motors and servos, motion mechanics, programming, machine vision, etc. Here is a brief introduction to what DH model kinematics is.
First, let's talk about forward kinematics and inverse kinematics.
Forward kinematics: determine the position and pose of the end effector given the values of the robot's joint variables.
Inverse kinematics: determine the values of the robot's joint variables from the given position and pose of the end effector.
DH Model Kinematics:
By constraining the placement of the joint coordinate systems, the transformation between consecutive joint frames is decomposed into 4 steps, each with only one variable or constant, thus reducing the difficulty of solving the manipulator's inverse kinematics.
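Those 4 steps (rotate about z, translate along z, translate along x, rotate about x) combine into one homogeneous transform per joint. Here is a minimal sketch of the standard DH transform; the example parameters at the end are illustrative only, not myPalletizer's real DH table:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform between consecutive joint
    frames: rotate theta about z, translate d along z, translate a
    along x, rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0,        sa,       ca,      d],
        [0,         0,        0,      1],
    ])

# Chaining one transform per joint gives the end-effector pose
# (illustrative DH parameters, not a real robot's table)
T = dh_transform(0.0, 50.0, 100.0, 0.0) @ dh_transform(np.pi / 2, 0.0, 80.0, 0.0)
```

Multiplying one such matrix per joint, in order from base to tip, is exactly the forward kinematics described above; inverse kinematics then works backwards from a desired `T` to the joint angles.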
With a robotic arm, we can learn much more about robotics.
Open Source Information
Elephant Robotics provides relevant information about myPalletizer in Gitbook. There are basic operation tutorials in mainstream programming languages, such as programming in python language, and a series of detailed introductions from the installation of the environment to the control of the robotic arm, providing beginners with a quick way to build and use the robotic arm.
More open source code on GitHub.
Artificial Intelligence Kit
We also provide an artificial intelligence kit. A robotic arm alone cannot do human work; it also needs a pair of eyes (a camera) for recognition, and the combination of the two can replace manual work. A camera just displays what it shoots; we need to program it to implement color and object recognition. We used OpenCV and Python to recognize and grab colored wooden blocks and to recognize and grab objects.
Let's see how it works.
The Artificial Intelligence Kit is designed to give us a better understanding of machine vision and machine learning. OpenCV is a powerful machine vision algorithm. If you want to learn more about the code, you can look up the project on GitHub.
myPalletizer is an excellent robotic arm for those just starting out! I hope this article helps you choose your own robotic arm. If you want to know more, feel free to comment below. If you enjoyed this article, please give us your support and a like; your likes are our motivation to keep updating!