    Best posts made by ElephantRobotics

    • A four-axis robotic arm ideal for industrial education | myPalletizer M5Stack-ESP32

      What is the 4-axis robotic arm?
      In the era of Industry 4.0, where information technology is being used to promote industrial change, robotic arms are essential in industry transformation. Automated robotic arms can reduce staff labor and increase productivity using automation technology combined with artificial intelligence, voice, and vision recognition. Robotic arms are now very relevant to our lives. Most robotic arms are built like human hands to perform more tasks such as grasping, pressing, and placing. The axes of a robotic arm represent degrees of freedom and independent movement, and most robotic arms have between two and seven axes. Here I will show you a four-axis palletizing robotic arm that is suitable for introductory learning.
      What is the palletizing robotic arm?
      Palletizing means neatly stacking items. Palletizing robotic arms grip, transfer, and stack items according to a fixed process.
      https://www.youtube.com/watch?v=oXiIPEDNTF8
      Which kind of robotic arm is more suitable? A 4-axis robotic arm? Or a 6-axis robotic arm?
      Let's look at the table.
      0_1668389561632_cec34f98-6f64-44b1-a890-4ce1fc7a3bd9-image.png
      The 4-axis palletizing robotic arm can only translate up and down, backward and forward, and left and right, with its end effector fixed pointing downward. This is a significant limitation in terms of application, and such arms are mainly used in high-speed pick-and-place scenarios. Six-axis robotic arms suit a much wider range of designs and can reach any position within their working envelope without dead zones. Here we will focus on the four-axis palletizing robotic arm.
      We made a video comparing the movement of the two types of robotic arms.
      https://www.youtube.com/watch?v=EuAIix7_D8g
      myPalletizer 260 M5Stack
      The myPalletizer robotic arm shown in the video, with an M5Stack-ESP32 as the central controller, is a fully wrapped, lightweight 4-axis palletizing robotic arm with an overall fin-less design; it is small, compact, and easy to carry. myPalletizer weighs 960 g, has a payload of 250 g, and a working radius of 260 mm. It is designed for individual makers and educational use, and with its multiple extension interfaces it can be paired with the AI Kit to learn machine vision.
      0_1668389663629_11ade7a8-3110-4a71-8b85-7ac5798806ee-image.png
      Why would we recommend this arm as an introductory 4-axis palletizing robotic arm?
      There are many four-axis (4-DOF) robotic arms in industry, and palletizing arms are the mainstream type. Compared to 6-axis robotic arms, myPalletizer has a simpler structure, fewer joints, less overhang, faster response, and higher operating efficiency, which makes it easier to use for tasks of this kind. Among palletizing robotic arms it is an excellent choice. Let's take a look at the myPalletizer 260-M5Stack parameters.
      0_1668389714850_2b2d03cc-67ab-4208-a147-f302fe9f007f-image.png
      For a robotic arm to be suitable for learning, several conditions should be met.

      • The robotic arm must support multiple functions.

      • The robotic arm should have a mainstream structure, so that many industrial robotic arm models can serve as references.

      • Supporting documentation for the robotic arm is available and provides the user with basic operating instructions.

      What can we learn with myPalletizer 260?
      Robotics

      When programming the robotic arm, we will learn about forward and inverse kinematics, DH model kinematics, Cartesian coordinate systems, motors and servos, motion mechanics, programming, machine vision, etc. Here is a brief introduction to what DH model kinematics is.
      First, let's talk about forward kinematics and inverse kinematics.
      Forward kinematics:
      Determine the position and pose of the end effector given the values of the robot joint variables.
      Inverse kinematics:
      The values of the robot joint variables are determined according to the given position and attitude of the end effector.
      DH Model Kinematics:
      By constraining how the joint coordinate frames are placed, the transformation between adjacent joint frames is decomposed into 4 steps, each with only one variable or constant, which reduces the difficulty of solving the manipulator's inverse kinematics.
      0_1668389815645_e4d19904-aa08-44e2-b9d0-046378832cc8-image.png
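      To make these definitions concrete, here is a minimal sketch assuming a hypothetical 2-link planar arm (it is not Elephant Robotics code, and the link lengths are illustrative): forward kinematics maps joint angles to an end position, and inverse kinematics maps a target position back to joint angles.

      import math

      # Hypothetical 2-link planar arm, link lengths in mm (illustrative values)
      L1, L2 = 100.0, 96.5

      def forward_kinematics(theta1, theta2):
          # Given joint angles in radians, return the end-effector position (x, z)
          x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
          z = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
          return x, z

      def inverse_kinematics(x, z):
          # Given a target position, return one set of joint angles (one of the two elbow solutions)
          cos_t2 = (x * x + z * z - L1 * L1 - L2 * L2) / (2 * L1 * L2)
          if abs(cos_t2) > 1:
              return None  # target is out of reach, no solution exists
          theta2 = math.acos(cos_t2)
          theta1 = math.atan2(z, x) - math.atan2(L2 * math.sin(theta2), L1 + L2 * math.cos(theta2))
          return theta1, theta2

      print(forward_kinematics(*inverse_kinematics(150.0, 60.0)))  # round-trips back to roughly (150.0, 60.0)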
      With a robotic arm in hand, we can learn a great deal more about robotics.
      Open Source Information
      Elephant Robotics provides the relevant information about myPalletizer on Gitbook. There are basic operation tutorials in mainstream programming languages such as Python, and a series of detailed introductions from environment installation to robotic arm control, giving beginners a quick way to set up and use the arm.
      0_1668389923892_6949dbf2-bc4d-4b7f-9f24-7afdcdc6aee6-image.png
      Programming support
      We can program the myPalletizer in Python, C++, C#, JavaScript, Arduino, and ROS, giving the user more options to control the myPalletizer.
      0_1668392644283_b69bb958-19b9-481c-8ac1-962c1237cce6-image.png
      More open source code on GitHub.
      Artificial Intelligence Kit
      We also provide an Artificial Intelligence Kit. A robotic arm alone cannot do the work of a human; it also needs a pair of eyes (a camera) for recognition, and the combination of the two can replace manual work. A camera only captures images, so we have to program it to perform color and object recognition. We used OpenCV and Python to recognize and grab colored wooden blocks, and to recognize and grab objects.
      Let's see how it works.
      0_1668392579355_174964e2-63cd-4cb4-97dd-71a2fc8a0422-image.png
      0_1668392676262_a28dbf31-2bb7-4d43-8c0d-484b56a5c8a2-image.png
      The Artificial Intelligence Kit is designed to give us a better understanding of machine vision and machine learning. OpenCV is a powerful machine vision library. If you want to learn more about the code, you can look up the project on GitHub.
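      As a taste of how this kind of color recognition works, here is a minimal OpenCV sketch (a simplified stand-in, not the kit's actual source); the HSV bounds and file name are illustrative and would be tuned per camera and lighting.

      import cv2
      import numpy as np

      img = cv2.imread("blocks.jpg")              # hypothetical input image of colored blocks
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # HSV is more robust than BGR for color thresholding

      # Illustrative HSV bounds for red
      lower, upper = np.array([0, 120, 70]), np.array([10, 255, 255])
      mask = cv2.inRange(hsv, lower, upper)       # keep only pixels inside the bounds

      # Find the largest red blob and report its pixel-space center
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      if contours:
          x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
          print("block center (px):", (x + w // 2, y + h // 2))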

      Summary
      myPalletizer is an excellent robotic arm for those just starting out! I hope this article helps you choose your own robotic arm. If you want to know more, feel free to comment below. If you enjoyed this article, please give us your support and a like; your likes are our motivation to keep updating!

      posted in PROJECTS
      ElephantRobotics
    • Facial Recognition and Tracking Project with mechArm M5stack

      Long time no see, I'm back.

      I'll give a report on the recent progress of the facial recognition and tracking project. For those who are new, let me briefly introduce what I am working on: I am using a desktop six-axis robotic arm with a camera mounted on its end for facial recognition and tracking. The project consists of two modules, one for facial recognition and the other for controlling the movement of the robotic arm. I've previously discussed how the basic movement of the robotic arm is controlled and how facial recognition is implemented, so I won't go into those details again. This report focuses on how the movement control module was completed.

      Equipment

      mechArm 270 M5Stack, camera
      alt text

      Details of the equipment can be found in the previous article.

      Motion control module

      Next, I'll introduce the movement control module.

      In the control module, the common input for movement control is the absolute position in Cartesian space. To obtain the absolute position, a camera and arm calibration algorithm, involving several unknown parameters, is needed. However, we skipped this step and chose to use relative displacement for movement control. This required designing a sampling movement mechanism to ensure that the face's offset is completely obtained in one control cycle and the tracking is implemented.

      Therefore, to get the entire function working quickly, I chose not to use a hand-eye calibration algorithm to handle the relationship between the camera and the arm, because the workload of hand-eye calibration is quite large.

      The code below shows how to obtain parameters from the information obtained by the facial recognition algorithm.

      Code:

      _, img = cap.read()
      # Convert to grayscale
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      # Detect faces
      faces = face_cascade.detectMultiScale(gray, 1.1, 4)
      # Draw the outline
      for (x, y, w, h) in faces:
          if w > 200 or w < 80:
              # Limit the recognized face width to between 80 and 200 pixels
              continue
          cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 3)
          center_x = (x+w-x)//2+x
          center_y = (y+h-y)//2+y
          size_face = w
      
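      For context, the snippet above assumes that a camera capture and a Haar cascade classifier were created beforehand; a typical setup (my reconstruction, not the author's exact code) looks like this:

      import cv2

      # Open the camera mounted at the arm's end (the device index is an assumption)
      cap = cv2.VideoCapture(0)

      # Load OpenCV's bundled frontal-face Haar cascade
      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")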

      The obtained variables, center_x, center_y, and size_face, are used to calculate the position. Below is the code for the algorithm that processes the data to control the movement.

      run_num = 20
      # Control cycle of 20 frames
      if save_state == False:
          # Save a start point (save_x, save_y)
          save_x = center_x
          save_y = center_y
          save_z = size_face
          origin_angles = mc.get_angles()
          print("origin point = ", save_x, save_y, origin_angles)
          time.sleep(2)
          current_coords = mc.get_coords()
          save_state = True
      else:
          if run_count > run_num:  # Limit the control period to 20 frames
              run_count = 0
              # Record the relative offsets
              error_x = center_x - save_x
              error_y = center_y - save_y
              error_z = size_face - save_z
              # Pixel differences are converted into actual offsets, which can be scaled and oriented
              trace_1 = -error_x * 0.15
              trace_z = -error_y * 0.5
              trace_x = -error_z * 2.0
              # x/z axis offset; note that this is open-loop control
              current_coords[2] += trace_z
              current_coords[0] += trace_x
              # Restrict the Cartesian-space x/z range
              if current_coords[0] < 70:
                  current_coords[0] = 70
              if current_coords[0] > 150:
                  current_coords[0] = 150
              if current_coords[2] < 220:
                  current_coords[2] = 220
              if current_coords[2] > 280:
                  current_coords[2] = 280
              # Inverse kinematic solution
              x = current_coords[0]
              z = current_coords[2]
              # print(x, z)
              L1 = 100
              L3 = 96.5194
              x = x - 56.5
              z = z - 114
              cos_af = (L1*L1 + L3*L3 - (x*x + z*z)) / (2*L1*L3)
              cos_beta = (L1*L1 - L3*L3 + (x*x + z*z)) / (2*L1*math.sqrt(x*x + z*z))
              reset = False
              # The solution is only applicable to some poses, so there may be no solution
              if abs(cos_af) > 1:
                  reset = True
              if reset == True:
                  current_coords[2] -= trace_z
                  current_coords[0] -= trace_x
                  print("err = ", cos_af)
                  continue
              af = math.acos(cos_af)
              beta = math.acos(cos_beta)
              theta2 = -(beta + math.atan(z/x) - math.pi/2)
              theta3 = math.pi/2 - (af - math.atan(10/96))
              theta5 = -theta3 - theta2
              cof = 57.295  # radians to degrees
              move_juge = False
              # Limit the distance travelled; trace_1 is a joint angle in degrees, trace_x/z are in mm
              if abs(trace_1) > 1 and abs(trace_1) < 15:
                  move_juge = True
              if abs(trace_z) > 10 and abs(trace_z) < 50:
                  move_juge = True
              if abs(trace_x) > 25 and abs(trace_x) < 80:
                  move_juge = True
              if (move_juge == True):
                  print("trace = ", trace_1, trace_z, trace_x)
                  origin_angles[0] += trace_1
                  origin_angles[1] = theta2*cof
                  origin_angles[2] = theta3*cof
                  origin_angles[4] = theta5*cof
                  mc.send_angles(origin_angles, 70)
              else:
                  # Due to the open-loop control, if no displacement occurs the current coordinate value needs to be restored
                  current_coords[2] -= trace_z
                  current_coords[0] -= trace_x
          else:
              # 10 frames are set aside for updating the camera coordinates at the end of the motion
              if run_count < 10:
                  save_x = center_x
                  save_y = center_y
                  save_z = size_face
              run_count += 1
      

      In the algorithm module, after obtaining the relative displacement, how do we move the arm? To ensure the quality of the movement, we did not directly use the coordinate movement interface provided by mechArm; instead, we added an inverse kinematics step in Python. For the specific posture, we calculated the inverse solution of the robotic arm and transformed the coordinate movement into angle movement, to avoid singular points and other factors that affect Cartesian-space movement. Combined with the code of the facial recognition part, the entire project is complete.

      Let's look at the results together.
      https://youtu.be/dNdqrkggr9c

      Normally, facial recognition has high computational requirements: the algorithm repeatedly evaluates adjacent pixel regions to increase recognition accuracy. We use the MechArm 270-Pi, which uses a Raspberry Pi 4B as the processor for facial recognition; its computing power is 400 MHz. Due to the insufficient computing power of the Raspberry Pi, we simplified the process and changed the recognition mechanism to just a few passes of coarse recognition. In our application, the background therefore needs to be relatively simple.

      Summary
      The facial recognition and robotic arm tracking project is completed.

      Key information about the project:

      ● With limited computing power, set up a simple usage scenario to achieve smooth results.

      ● Replace the complex hand-eye calibration algorithm with relative-position movement, and use a sampling movement mechanism to ensure that the face's offset is fully captured in one control cycle so that tracking can be implemented.

      ● Add the inverse kinematics part in Python: calculate the inverse solution of the robotic arm for specific postures and convert coordinate movement into angle movement, avoiding singular points and other factors that affect Cartesian-space movement.

      Some shortcomings of the project:

      ● There are certain requirements for the usage scenario, and a clean background is needed to run successfully (by fixing the scene, many parameters were simplified)

      ● As mentioned earlier, the computing power of the Raspberry Pi is insufficient; using other control boards, such as the Jetson Nano (600 MHz) or a high-performance image-processing computer, would make it run more smoothly.

      ● Also, in the movement control module, because we did not do hand-eye calibration, only relative displacement can be used. The control is divided into a "sampling stage" and a "movement stage". Ideally the camera should be stationary during sampling, but this is difficult to guarantee, so the coordinates deviate when the camera is still moving while sampling.

      Finally, I would like to specially thank Elephant Robotics for their help during the development of the project, which made it possible to complete it. The MechArm used in this project is a centrally symmetrical structured robotic arm with limitations in its joint movement. If the program is applied to a more flexible myCobot, the situation may be different.

      If you have any questions about the project, please leave me a message below.

      posted in PROJECTS
      ElephantRobotics
    • The Ultimate Robotics Comparison: A Deep Dive into the Upgraded Robot AI Kit 2023

      Introduction

      The AI Kit (Artificial Intelligence Kit) is designed to provide a set of kits suitable for beginners and professionals to learn and apply artificial intelligence. It includes robotic arms (myCobot 280 M5Stack, mechArm 270 M5Stack, myPalletizer 260 M5Stack), related software, hardware, sensors, and other devices, as well as supporting tutorials and development tools. The AI Kit aims to help users better understand and apply artificial intelligence technology and provide them with opportunities for practice and innovation. The latest upgrade further enhances the functionality and performance of the AI Kit 2023, making it more suitable for various scenarios and needs, including education, scientific research, manufacturing, and more.
      alt text

      Product Description

      AI Kit is an entry-level artificial intelligence kit that combines visual, positioning, grabbing, and automatic sorting modules in one. The kit is based on the Python programming language and enables control of robotic arms through software development. With the ROS robot operating system in the Ubuntu system, a real 1:1 scene simulation model is established, allowing for quick learning of fundamental artificial intelligence knowledge, inspiring innovative thinking, and promoting open-source creative culture. This open-source kit has transparent designs and algorithms that can be easily used for specialized training platforms, robotics education, robotics laboratories, or individual learning and use.
      alt text
      Why upgrade AI Kit 2023?
      The answer to why we upgraded AI Kit 2023 is multifaceted. First, we collected extensive feedback from our users and incorporated their suggestions into the new release. The upgraded version enhances the functionality and performance of the AI Kit, making it more suitable for various scenarios and industries such as education, research, and manufacturing. The following are some of the reasons for this.

      ● Even with detailed installation instructions, installation environment setup for the AI Kit can still be challenging due to various reasons, causing inconvenience to users.

      ● The first generation of the AI Kit only has two recognition algorithms: color recognition and feature point recognition. We aim to provide a more diverse range of recognition algorithms.

      ● Due to the abundance of parts and complex device setups, the installation process of the AI Kit can be time-consuming and require a lot of adjustment.

      Based on the above 3 points, we have begun optimizing and upgrading the AI Kit.

      What aspects have been upgraded in AI Kit 2023?
      Let's take a look at a rough comparison table of the upgrades.
      alt text
      The additions to the functionality can be divided into two main areas of improvement.
      One is the software upgrades, and the other is the hardware upgrades.
      Let’s start by looking at the hardware upgrades.

      Hardware upgrades

      alt text
      The AI Kit 2023 has been upgraded in several aspects, as shown in the comparison table. The updated AI Kit has a clean and minimalist style with multiple hardware upgrades, including:

      • Acrylic board: upgraded in hardness and material

      • Camera: upgraded to higher resolution and added a lighting lamp

      • External material of the camera: upgraded from plastic to metal

      • Suction pump: adjusted to suitable power (not too strong or weak) and an upgraded interface (old models require an additional power supply interface)

      • Arm base: strengthened the fixing of the arm to make arm movement more stable

      • Bucket/parts box: smaller in size for easier carrying and installation
        Here is a video of unboxing the AI Kit 2023.
        video
        The overall impression is still very good, let’s take a look at the software upgrades that have been made.

      Software upgrades

      ● Optimization of environment setup: the previous version of the AI Kit needed to run in a ROS development environment. Based on user feedback that installing Linux, ROS, and the other dependencies was difficult, we now run the program directly in a plain Python environment, which is much easier to set up than a combined Python and ROS environment.

      ● Upgrade of program UI: The previous version had a one-click start UI interface, which did not provide users with much information (similar to simple operations such as booting up). In the AI Kit 2023 program, a brand new UI interface has been designed, which can give users a refreshing feeling in terms of both aesthetics and functionality. It not only provides users with convenient operation, but also helps users to have a clearer understanding of the operation of the entire program.
      alt text
      From the figure, we can see the features of connecting the robotic arm, opening the camera, selecting recognition algorithms, and automatic startup. These designs can help users better understand the AI Kit.

      ● Breakthroughs in recognition algorithms: In addition to the original color recognition and feature point recognition algorithms, the AI Kit has been expanded to include five recognition algorithms, which are color recognition, shape recognition, ArUco code recognition, feature point recognition, and YOLOv5 recognition. The first four recognition algorithms are based on the OpenCV open-source software library. YOLOv5 (You Only Look Once version 5) is a recent popular recognition algorithm and a target detection algorithm that has undergone extensive training.
      alt text
      The expansion of recognition algorithms is also intended to provide users with their own creative direction. Users can add other recognition algorithms to the existing AI Kit 2023.
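      For readers curious what the YOLOv5 option looks like in code, here is a minimal, hedged sketch using the public ultralytics/yolov5 PyTorch Hub interface (generic YOLOv5 usage, not the AI Kit 2023 source; the image file name is a placeholder):

      import torch

      # Load the small pretrained YOLOv5 model from PyTorch Hub (downloads weights on first run)
      model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

      # Run detection on an image file path, URL, or numpy array
      results = model("workspace.jpg")   # hypothetical photo of the sorting area
      results.print()                    # summary of detected classes and confidences
      print(results.pandas().xyxy[0])    # bounding boxes as a DataFrame (xmin, ymin, xmax, ymax, confidence, class)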

      Summary

      The upgrade of the AI Kit 2023 has been a great success, thanks to extensive user feedback and product planning. This upgrade provides users with a better learning and practical experience, helping them to master AI technology more easily. The new AI Kit also introduces many new features and improvements, such as more accurate algorithms, more stable performance, and a more user-friendly interface. In summary, the upgrade of the AI Kit 2023 is a very successful improvement that will bring better learning and practical experiences and a wider range of application scenarios to more users.

      In the future, we will continue to adhere to the principle of putting users first, continuously collect and listen to user feedback and needs, and further improve and optimize the AI Kit 2023 to better meet user needs and application scenarios. We believe that with continuous effort and innovation, the AI Kit 2023 will become an even better AI Kit, providing better learning and practical experiences for users and promoting the development and application of AI technology.

      posted in PROJECTS
      ElephantRobotics
    • Building a Smart Navigation System using myCobot M5Stack-Base and myAGV

      Introduction

      As a developer, I am currently involved in an interesting project to combine a SLAM (Simultaneous Localization and Mapping) car, myAGV, with a small six-axis robotic arm, myCobot 280 M5Stack, for research on logistics automation in education and scientific fields.

      myAGV is a small car that can perform mapping and navigation and uses Raspberry Pi 4B as the controller. It can locate and move indoors and outdoors. MyCobot280 is a small collaborative robotic arm with six degrees of freedom that can accomplish various tasks in limited space.

      My project goal is to integrate these two devices to achieve automated logistics transportation and placement. We plan to use open-source software and existing algorithms to achieve autonomous navigation, localization, mapping, object grasping, and placement functions. Through documenting the process in this article, we aim to share our journey in developing this project.

      The equipment that I am using includes:

      myAGV, a SLAM car that is capable of mapping and navigation.
      alt text

      myCobot280 M5Stack, a six-axis collaborative robotic arm with a complete API interface that can be controlled via Python.
      alt text

      An adaptive gripper that can be mounted on myCobot280 as an end effector, which is capable of grasping objects.
      alt text
      alt text

      Development environment:

      Ubuntu 18.04, Python 3.0+, ROS1.

      Note: myAGV is controlled by Raspberry Pi 4B, and all environment configurations are based on the configurations provided on the Raspberry Pi.

      Project

      The picture below shows the general flow of this project.
      alt text

      I split the overall function into small parts, implemented each part independently, and finally integrated them together.

      myAGV

      Firstly, I am working on the functions of myAGV, to perform mapping and automated navigation. I am implementing these functions based on the information provided in the official Gitbook.

      I am using the gmapping algorithm to perform mapping. Gmapping, also known as grid-based mapping, is a well-established algorithm for generating 2D maps of indoor environments. It works by building a grid map of the environment using laser range finder data, which can be obtained from the sensors mounted on myAGV.
      alt text

      It's worth noting that I have tried myAGV in various scenarios, and the mapping performance is good when the environment is relatively clean. However, when the surrounding area is complex, the mapping results may not be as good. I will try to improve it by modifying the hardware or software in the future.

      The picture below shows myAGV performing automatic navigation.
      alt text

      During automatic navigation, myAGV still experiences deviations. Implementing navigation functionality is quite complex because the navigation conditions are quite strict. It is necessary to adjust the actual position of myAGV after enabling navigation and turn in place to determine if the position is correct. There are still many areas for improvement in navigation functionality, such as automatically locating the position of the small car on the map after enabling navigation, among other aspects.

      myCobot 280

      After handling the myAGV, the next step is to control the myCobot movement.

      Here, I use Python to control myCobot 280. Python is an easy-to-use programming language, and myCobot's Python API is also quite comprehensive. Below, I will briefly introduce several methods in pymycobot.

      time.sleep()
      Function: pause for a few seconds (the robotic arm needs a certain amount of time to complete its movement).
      send_angles([angle_list], speed)
      Function: send the target angle of each joint and the running speed to the robotic arm.
      set_gripper_value(value, speed)
      Function: control the opening and closing of the gripper; 0 is closed, 100 is open, adjustable from 0 to 100.
      

      I wrote a simple program to grab objects; see the GIF demo and the short sketch below.
      alt text
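      Here is a hedged sketch of what such a grab routine could look like with pymycobot; the serial port, joint angles, and gripper values are placeholders, not the exact values used in the demo.

      import time
      from pymycobot.mycobot import MyCobot

      mc = MyCobot("/dev/ttyUSB0", 115200)        # serial port and baud rate are assumptions

      mc.send_angles([0, 0, 0, 0, 0, 0], 50)      # move to a neutral pose
      time.sleep(3)                               # wait for the motion to finish

      mc.set_gripper_value(100, 50)               # open the gripper fully
      time.sleep(2)

      mc.send_angles([0, -30, -45, 0, 0, 0], 50)  # placeholder pose above the object
      time.sleep(3)

      mc.set_gripper_value(20, 50)                # close the gripper to grasp
      time.sleep(2)

      mc.send_angles([0, 0, 0, 0, 0, 0], 50)      # lift back to the neutral pose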
      Establishing communication
      After dealing with the small functions, the next step is to establish communication between myCobot and myAGV.

      • The controller of myAGV is a Raspberry Pi, which is a micro-computer (with Ubuntu 18.04 system) that can be programmed on it.

      • MyCobot 280 M5Stack needs to be controlled by commands sent from a computer.

      Based on the above conditions, there are two ways to establish communication between them:

      • Serial communication: directly connect them using a TypeC-USB data cable (the simplest and most direct method).

      • Wireless connection: myCobot supports WIFI control, and commands can be sent by entering the corresponding IP address (more complicated and communication is not stable).
        Here, I choose to use serial communication and directly connect them with a data cable.
        alt text
        Here I recommend a software called VNC Viewer, which is a cross-platform remote control software. I use VNC to remotely control myAGV, which is very convenient because I don't have to carry a monitor around.

      If you have any better remote control software, you can leave a comment below to recommend it to me.

      Let's see how the overall operation works.
      alt text

      Summary

      In this project, only simple SLAM-related algorithms are used. The navigation algorithm needs to be further optimized to achieve more accurate navigation. As for the usage of myCobot, it is a relatively mature robotic arm with a convenient interface, and the end effectors provided by the Elephant Robotics can meet the requirements without the need to build a gripper for the project.

      There are still many aspects of the project that need to be optimized, and I will continue to develop it in the future. Thank you for watching, and if you have any interest or questions, please feel free to leave a comment below.

      posted in PROJECTS
      ElephantRobotics
    • Smart Applications of Holography and Robotic Arms myCobot 320 M5Stack-Basic

      alt text
      Introduction
      Do you think this display is innovative and magical? Actually, this is a technology called holographic projection. Holographic technology has become a part of our daily lives, with applications covering multiple fields. In the entertainment industry, holographic technology is used in movie theaters, game arcades, and theme parks. Through holographic projection technology, viewers can enjoy more realistic visual effects, further enhancing their entertainment experience. In the medical field, holographic technology is widely used in medical diagnosis and surgery. By presenting high-resolution 3D images, doctors can observe the condition more accurately, improving the effectiveness of diagnosis and surgery. In the education field, holographic technology is used to create teaching materials and science exhibitions, helping students better understand and master knowledge. In addition, holographic technology is also applied in engineering manufacturing, safety monitoring, virtual reality and other fields, bringing more convenience and innovation to our lives. It is foreseeable that with the continuous development of technology and the continuous expansion of application scenarios, holographic technology will play a more important role in our future lives.

      alt text
      (Images from the internet)

      The main content of this article is to describe how to use myCobot320 M5Stack 2022 and DSee-65X holographic projection equipment to achieve naked-eye 3D display.

      This project is jointly developed by Elephant Robotics and DSeeLab Hologram.

      DSee-65X holographic equipment:
      Let's take a brief look at how the holographic image is generated. The holographic screen is a display device based on the principle of persistence of vision (POV, the after-image of moving objects): by spinning a bar of ultra-high-density LEDs, it produces a 3D holographic, air-suspended, stereoscopic display effect, breaking the limitations of traditional flat displays. It also supports real-time synchronization and interactive development, leading a new trend in the commercial holographic display industry.

      DSee-65X is a product of DSee Lab Hologram, a company that specializes in holographic technology.

      DSee-65X: high resolution, high brightness, supports various content formats, WiFi connection, APP operation, cloud remote cluster control, unlimited splicing for large screen display, 30,000 hours of continuous operation.

      Here is a video introduction of DSee-65X.
      https://youtu.be/UDXlNgjwQ8c
      alt text

      myCobot 320 M5Stack 2022

      myCobot 320 M5Stack is an upgraded version of the myCobot 280 product, mainly suitable for makers and researchers, and can be customized according to user needs through secondary development. It has three major advantages of usability, safety, and economy, with a sophisticated all-in-one design. The myCobot 320 weighs 3kg, has a payload of 1kg, a working radius of 350mm, and is relatively compact but powerful. It is easy to operate, can collaborate with humans, and work safely. The myCobot 320 2022 is equipped with a variety of interfaces and can quickly adapt to various usage scenarios.
      alt text

      Here is a video presentation of the myCobot 320 M5Stack 2022
      https://youtu.be/B14BS6I-uS4

      With the introduction of the two devices complete, the next step is to combine the holographic device with the robotic arm so they work together. The operation of this project is very simple and can be divided into two steps:

      1. Install the DSee-65X at the end of myCobot 320.

      2. Control myCobot 320 to perform a beautiful trajectory to display the holographic image.

      Project

      Installation

      DSee-65X and myCobot320 M5Stack 2022 are products from two different companies. When we received them, we found that we couldn't directly install the holographic device on the end of myCobot320. Therefore, we needed to modify the holographic device.

      This is the structure at the end of myCobot320
      alt text

      This is the DSee-65X

      alt text
      According to the provided information, we added a board as a bridge between them for adaptation.
      alt text

      The maximum load of myCobot320 can reach up to 1kg, so this modification is completely feasible for it.

      Controlling the Robotic Arm
      Our goal is to design a trajectory for the myCobot 320 robotic arm that ensures an unobstructed view of the hologram display.
      alt text
      The myCobot 320 has a rich set of interfaces and supports Python, C++, C#, JavaScript, Arduino, ROS, and more. Next, we will program it. Here we use a very easy-to-learn method: programming with the myBlockly software. myBlockly is graphical programming software that lets code be written by drag and drop.
      alt text

      The code in the picture is a graphic code for the trajectory of the myCobot 320.

      myBlockly's underlying code is written in Python, so we can also directly use Python code to control the robotic arm. The following is an example of Python code:

      import time
      from pymycobot.mycobot import MyCobot
      
      mc = MyCobot('/dev/ttyUSB0')
      mc.set_speed(60)
      
      # move to a home position
      mc.send_angles([0, -90, 90, 0, 0, 0], 80)
      time.sleep(1)
      
      # move to a new position
      mc.send_angles([0, -90, 90, 0, 0, 30], 80)
      time.sleep(1)
      
      # move to another position
      mc.send_angles([0, -90, 90, 0, 30, 30], 80)
      time.sleep(1)
      
      # move to a final position
      mc.send_angles([0, -90, 90, 0, 30, 0], 80)
      time.sleep(1)
      
      mc.release_all_servos()
      

      Briefly explain how to use the DSee-65X.

      DSee-65X has its own dedicated LAN. By connecting your computer to the same LAN, you can launch the software to make the holographic device work.
      alt text
      alt text

      Summary

      The whole process seems to be just a display of holographic imaging device with the robotic arm serving as a support. However, we can imagine more possibilities by using holographic projection technology to project 3D models or images into space and then capturing users' movements or gestures with sensors or cameras to control the robotic arm. For example, in manufacturing or logistics industries, combining robotic arms with holographic technology can achieve more efficient production and logistics operations. In the medical field, using robotic arms and holographic technology can achieve more precise surgery and treatment. In short, combining robotic arms and holographic technology can bring more intelligent and precise control and operation methods for various application scenarios, improving production efficiency and work quality.

      These are all areas that require creative minds like yours to put in effort and develop! Please feel free to leave your ideas in the comments below and let's discuss together how to create more interesting projects.

      posted in PROJECTS
      ElephantRobotics
    • Exploring the Advantages and Differences of Different Types of Robotic Arms in AI Kit

      This article is primarily about introducing 3 robotic arms that are compatible with AI Kit. What are the differences between them?
      If you have a robotic arm, what would you use it for? Simple control of the robotic arm to move it around? Repeat a certain trajectory? Or allow it to work in the industry to replace humans? With the advancement of technology, robots are frequently appearing around us, replacing us in dangerous jobs and serving humanity. Let's take a look at how robotic arms work in an industrial setting.
      alt text

      Introduction

      What is the AI Kit?

      The AI Kit is an entry-level artificial intelligence Kit that integrates vision, positioning, grasping, and automatic sorting modules. Based on the Linux system and built-in ROS with a 1:1 simulation model, the AI Kit supports the control of the robotic arm through the development of software, allowing for a quick introduction to the basics of artificial intelligence.

      alt text
      Currently, the AI Kit can achieve color and image recognition, automatic positioning, and sorting. This kit is very helpful for users who are new to robotic arms and machine vision, as it allows you to quickly understand how artificial intelligence projects are built and learn more about how machine vision works with robotic arms.

      Next, let's briefly introduce the 3 robotic arms that are compatible with the AI Kit.
      The AI Kit can be used with the myPalletizer 260 M5Stack, myCobot 280 M5Stack, and mechArm 270 M5Stack. All three robotic arms are equipped with the M5Stack-Basic and the ESP32-ATOM.

      Robotic Arms

      myPalletizer 260

      myPalletizer 260 is a lightweight 4-axis robotic arm; it is compact and easy to carry. It weighs 960 g, has a 250 g payload, and a working radius of 260 mm. It is designed specifically for makers and educators and has rich expansion interfaces.
      alt text

      mechArm 270

      mechArm 270 is a small 6-axis robotic arm with a center-symmetrical structure (like an industrial structure). The mechArm 270 weighs 1kg with a payload of 250g, and has a working radius of 270mm. As the most compact collaborative robot, mechArm is small but powerful.
      alt text
      myCobot 280
      myCobot 280 is the smallest and lightest 6-axis collaborative robotic arm (UR structure) in the world, which can be customized according to user needs. The myCobot has a self-weight of 850g, an effective load of 250g, and an effective working radius of 280mm. It is small but powerful and can be used with various end effectors to adapt to various application scenarios, as well as support the development of software on multiple platforms to meet the needs of various scenarios, such as scientific research and education, smart home, and business pre R&D.
      alt text
      Let's watch a video to see how AI Kit works with these 3 robotic arms.
      https://youtu.be/kgJeSbo9XE0

      Project Description

      The video shows the color recognition and intelligent sorting function, as well as the image recognition and intelligent sorting function. Let's briefly introduce how AI Kit is implemented (using the example of the color recognition and intelligent sorting function).

      This artificial intelligence project mainly uses two modules:

      ●Vision processing module

      ●Computation module (handles the conversion between eye to hand)

      Vision processing module

      OpenCV (Open Source Computer Vision) is an open-source computer vision library used to develop computer vision applications. OpenCV includes a large number of functions and algorithms for image processing, video analysis, deep learning based object detection and recognition, and more.

      We use OpenCV to process images. The video from the camera is processed to obtain information from the video such as color, image, and the plane coordinates (x, y) in the video. The obtained information is then passed to the processor for further processing.
      Here is part of the code to process the image (colour recognition)

      # detect cube color
      def color_detect(self, img):
          # set up for matching against the stored HSV color ranges
          x = y = 0
          gs_img = cv2.GaussianBlur(img, (3, 3), 0)  # Gaussian blur
          # transform the image into the HSV color model
          hsv = cv2.cvtColor(gs_img, cv2.COLOR_BGR2HSV)
          for mycolor, item in self.HSV.items():
              redLower = np.array(item[0])
              redUpper = np.array(item[1])
              # wipe off all colors except the color in range
              mask = cv2.inRange(hsv, item[0], item[1])
              # an erosion operation on the picture to remove edge roughness
              erosion = cv2.erode(mask, np.ones((1, 1), np.uint8), iterations=2)
              # a dilation operation on the image, which deepens the color regions in the picture
              dilation = cv2.dilate(erosion, np.ones(
                  (1, 1), np.uint8), iterations=2)
              # adds pixels to the image
              target = cv2.bitwise_and(img, img, mask=dilation)
              # the filtered image is transformed into a binary image and placed in binary
              ret, binary = cv2.threshold(dilation, 127, 255, cv2.THRESH_BINARY)
              # get the contour coordinates of the image, where contours is the coordinate value; here only the contour is detected
              contours, hierarchy = cv2.findContours(
                  dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
              if len(contours) > 0:
                  # do something about misidentification
                  boxes = [
                      box
                      for box in [cv2.boundingRect(c) for c in contours]
                      if min(img.shape[0], img.shape[1]) / 10
                      < min(box[2], box[3])
                      < min(img.shape[0], img.shape[1]) / 1
                  ]
                  if boxes:
                      for box in boxes:
                          x, y, w, h = box
                      # find the largest object that fits the requirements
                      c = max(contours, key=cv2.contourArea)
                      # get the lower left and upper right points of the positioned object
                      x, y, w, h = cv2.boundingRect(c)
                      # locate the target by drawing a rectangle
                      cv2.rectangle(img, (x, y), (x+w, y+h), (153, 153, 0), 2)
                      # calculate the rectangle center
                      x, y = (x*2+w)/2, (y*2+h)/2
                      # record which color was matched
                      if mycolor == "red":
                          self.color = 0
                      elif mycolor == "green":
                          self.color = 1
                      elif mycolor == "cyan" or mycolor == "blue":
                          self.color = 2
                      else:
                          self.color = 3
          if abs(x) + abs(y) > 0:
              return x, y
          else:
              return None
      

      Just obtaining image information is not enough; we must process the obtained data and pass it on to the robotic arm to execute commands. This is where the computation module comes in.

      Computation module

      NumPy (Numerical Python) is an open-source Python library mainly used for mathematical calculations. NumPy provides many functions and algorithms for scientific calculations, including matrix operations, linear algebra, random number generation, Fourier transform, and more. We need to process the coordinates on the image and convert them to real coordinates, a specialized term called eye to hand. We use Python and the NumPy computation library to calculate our coordinates and send them to the robotic arm to perform sorting.
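      At its core, this eye-to-hand step is an affine mapping from image pixels to arm coordinates. Below is a simplified, hedged illustration of the idea; the scale and offset values are placeholders, whereas the actual project derives its parameters from reference markers, as the code that follows shows.

      import numpy as np

      # Placeholder calibration values: mm-per-pixel scale and the arm-frame
      # coordinates corresponding to the image origin (the real kit derives these from markers)
      scale = 0.5                         # mm per pixel (illustrative)
      origin = np.array([150.0, -60.0])   # arm-frame position of pixel (0, 0) (illustrative)

      def pixel_to_arm(px, py):
          # Map an image-space point to an (x, y) position in the robotic arm's frame
          return origin + scale * np.array([px, py])

      print(pixel_to_arm(320, 240))       # -> [310.  60.]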

      Here is part of the code for the computation.

      while cv2.waitKey(1) < 0:
          # read camera
          _, frame = cap.read()
          # deal img
          frame = detect.transform_frame(frame)
          if _init_ > 0:
              _init_ -= 1
              continue
          # calculate the parameters of camera clipping
          if init_num < 20:
              if detect.get_calculate_params(frame) is None:
                  cv2.imshow("figure", frame)
                  continue
              else:
                  x1, x2, y1, y2 = detect.get_calculate_params(frame)
                  detect.draw_marker(frame, x1, y1)
                  detect.draw_marker(frame, x2, y2)
                  detect.sum_x1 += x1
                  detect.sum_x2 += x2
                  detect.sum_y1 += y1
                  detect.sum_y2 += y2
                  init_num += 1
                  continue
          elif init_num == 20:
              detect.set_cut_params(
                  (detect.sum_x1)/20.0,
                  (detect.sum_y1)/20.0,
                  (detect.sum_x2)/20.0,
                  (detect.sum_y2)/20.0,
              )
              detect.sum_x1 = detect.sum_x2 = detect.sum_y1 = detect.sum_y2 = 0
              init_num += 1
              continue
          # calculate params of the coords between cube and mycobot
          if nparams < 10:
              if detect.get_calculate_params(frame) is None:
                  cv2.imshow("figure", frame)
                  continue
              else:
                  x1, x2, y1, y2 = detect.get_calculate_params(frame)
                  detect.draw_marker(frame, x1, y1)
                  detect.draw_marker(frame, x2, y2)
                  detect.sum_x1 += x1
                  detect.sum_x2 += x2
                  detect.sum_y1 += y1
                  detect.sum_y2 += y2
                  nparams += 1
                  continue
          elif nparams == 10:
              nparams += 1
              # calculate and set params of calculating real coord between cube and mycobot
              detect.set_params(
                  (detect.sum_x1+detect.sum_x2)/20.0,
                  (detect.sum_y1+detect.sum_y2)/20.0,
                  abs(detect.sum_x1-detect.sum_x2)/10.0 +
                  abs(detect.sum_y1-detect.sum_y2)/10.0
              )
              print("ok")
              continue
          # get detect result
          detect_result = detect.color_detect(frame)
          if detect_result is None:
              cv2.imshow("figure", frame)
              continue
          else:
              x, y = detect_result
              # calculate real coord between cube and mycobot
              real_x, real_y = detect.get_position(x, y)
              if num == 20:
                  detect.pub_marker(real_sx/20.0/1000.0, real_sy/20.0/1000.0)
                  detect.decide_move(real_sx/20.0, real_sy/20.0, detect.color)
                  num = real_sx = real_sy = 0
              else:
                  num += 1
                  real_sy += real_y
                  real_sx += real_x
      

      The AI Kit project is open source and can be found on GitHub.

      Difference

      After comparing the video, content, and code of the program, it appears that the 3 robotic arms have the same framework and only need minor modifications to the data to run successfully.

      There are roughly two main differences between these 3 robotic arms.

      One is comparing the 4- and 6-axis robotic arms in terms of their practical differences in use (comparing myPalletizer to mechArm/myCobot).

      Let's look at a comparison between a 4-axis robotic arm and a 6-axis robotic arm.
      alt text
      From the video, we can see that both the 4-axis and 6-axis robotic arms have a sufficient range of motion in the AI Kit's work area. The main difference between them is that myPalletizer has a simple and quick start process with only 4 joints in motion, allowing it to efficiently and steadily perform tasks, while myCobot requires 6 joints, two more than myPalletizer, resulting in more calculations in the program and a longer start time (in small scenarios).

      In summary, when the scene is fixed, we can consider the working range of the robotic arm as the first priority when choosing a robotic arm. Among the robotic arms that meet the working range, efficiency and stability will be necessary conditions. If there is an industrial scene similar to our AI Kit, a 4-axis robotic arm will be the first choice. Of course, a 6-axis robotic arm can operate in a larger space and can perform more complex movements. They can rotate in space, while a 4-axis robotic arm cannot do this. Therefore, 6-axis robotic arms are generally more suitable for industrial applications that require precise operation and complex movement.

      alt text
      The second comparison is between the two 6-axis robotic arms, whose main difference is structure: mechArm is a centrally symmetrical robotic arm, while myCobot is a UR-structure collaborative robotic arm. We can compare the differences between these two structures in actual application scenarios.

      Here are the specifications of the two robotic arms.
      alt text
      The difference in structure between these two leads to a difference in their range of motion. Taking mechArm as an example, the centrally symmetrical structure of the robotic arm is composed of 3 pairs of opposing joints, with the movement direction of each pair of joints being opposite. This type of robotic arm has good balance and can offset the torque between joints, keeping the arm stable.
      alt text
      Shown in the video, mechArm is also relatively stable in operation.

      You may now ask: is myCobot not useful then? Of course not. The UR-structure robotic arm is more flexible and can achieve a larger range of motion, which suits larger application scenarios. More importantly, myCobot is a collaborative robotic arm: it has good human-robot interaction capability and can work alongside humans. 6-axis collaborative robotic arms are usually used in logistics and assembly work on production lines, as well as in the medical, research, and education fields.

      Summary

      As stated at the beginning, the difference between these 3 robotic arms included in the AI Kit is essentially how to choose a suitable robotic arm to use. If you are choosing a robotic arm for a specific application, you will need to take into consideration factors such as the working radius of the arm, the environment in which it will be used, and the load capacity of the arm.

      If you are looking to learn about robotic arm technology, you can choose a mainstream robotic arm currently available on the market to learn from. MyPalletizer is designed based on a palletizing robotic arm, mainly used for palletizing and handling goods on pallets. mechArm is designed based on a mainstream industrial robotic arm, which has a special structure that keeps the arm stable during operation. myCobot is designed based on a collaborative robotic arm, which is a popular arm structure in recent years, capable of working with humans and providing human strength and precision.

      That's all for this post, if you like this post, please leave us a comment and a like!

      We have published an article detailing the differences between mechArm and myCobot. Please click on the link if you are interested in learning more.

      posted in PROJECTS
      ElephantRobotics
    • myAGV | SLAM-based autonomous localization and navigation for robots

      Program: Map creation and automatic navigation with myAGV
      Equipment:
      1 myAGV:
      myAGV is an autonomous navigation smart vehicle from Elephant Robotics. It uses competition-grade Mecanum wheels and a fully wrapped design with a metal frame. It has two built-in SLAM algorithms for learning mapping and navigation.
      0_1664506213493_agv.jpg
      2 PC:
      A normal computer.

      Autonomous robot positioning and navigation technology includes two parts: localization and map creation (SLAM), and path planning and motion control. SLAM only handles the robot's localization and map creation; the full solution also requires path planning and motion control.

      The way a robot describes and recognizes its environment depends mainly on the map. It uses an environmental map to describe its current surroundings, and the form of the map description differs depending on the algorithms and sensors used.
      We use the gmapping algorithm for SLAM. Gmapping builds the map as a raster (occupancy grid) map.
      Gmapping:
      Gmapping is a SLAM algorithm based on 2D LiDAR that uses the RBPF (Rao-Blackwellized Particle Filter) approach to build a 2D raster map.
      Advantages:
      gmapping can build indoor environment maps in real time, requires relatively little computation in small scenes, produces maps with high accuracy, and needs a comparatively low LiDAR scanning frequency.
      Disadvantages:
      As the environment grows larger, the memory and computation required to build the map become huge, so gmapping is unsuitable for large scenes. For an intuitive feel: for a 200 x 200 m area with a raster resolution of 5 cm, if each raster cell occupies one byte of memory, then each particle carries 16 MB of map data, and with 100 particles that is 1.6 GB of RAM.
      Raster Map:
      The most common way for robots to describe the environment is a Grid Map (Occupancy Map), which divides the environment into a series of grid cells, where each cell is given a value indicating the probability that it is occupied.
      0_1664506385274_1.png
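      As a toy illustration of the idea (not the gmapping implementation), an occupancy grid is just a 2D array of occupancy probabilities that gets updated as laser hits accumulate; the grid size, resolution, and update step below are illustrative.

      import numpy as np

      # 5 cm resolution grid covering a 10 m x 10 m area, initialized to 0.5 (unknown)
      resolution = 0.05                 # meters per cell
      grid = np.full((200, 200), 0.5)

      def mark_hit(x, y):
          # Raise the occupancy probability of the cell containing a laser hit at (x, y) meters
          i, j = int(y / resolution), int(x / resolution)
          grid[i, j] = min(1.0, grid[i, j] + 0.2)

      mark_hit(3.0, 4.5)                # an obstacle detected 3 m along x and 4.5 m along y
      print(grid[int(4.5 / resolution), int(3.0 / resolution)])   # 0.7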

      Start the project:
      Because myAGV has a built-in Raspberry Pi computer, controlling it directly would require a keyboard, mouse, and monitor, so instead we use another computer to control myAGV remotely. We use VNC for remote control.
      VNC:
      VNC (Virtual Network Computing) is an excellent remote control tool originally developed at AT&T's European research labs. It is free, open-source software that runs on UNIX and Linux operating systems, has powerful remote control capabilities, is efficient and practical, and its performance is comparable to remote control software on Windows and macOS.
      0_1664506459759_23.png
      Place the myAGV on a horizontal surface.
      0_1664520988647_555.png
      The launch file starts myAGV's odometry and IMU sensors.
      Enter the command in the terminal

      roslaunch myagv_odometry myagv_active.launch
      

      0_1664506614777_建图1.png
      After turning on the odometer and IMU sensors, we then turn on the radar and gmapping algorithms to start the map creation.
      Enter the command in the terminal:

      roslaunch myagv_navigation myagv_slam_laser.launch
      

      0_1664506673304_建图2.png
      This is the view right after starting. We then move myAGV around to draw out the map. To control myAGV's movement, Elephant Robotics provides keyboard control.
      0_1664506699010_12346.jpg
      Save the map.
      Enter the command in the terminal

      rosrun map_server map_saver
      

      0_1664506764839_建图3.png
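      If you want to save the map under a specific name and location, map_server's map_saver accepts a -f argument, for example:

      rosrun map_server map_saver -f ~/maps/my_room
      

      This writes my_room.pgm and my_room.yaml, which can then be referenced in the navigation launch file (the path and name here are placeholders).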
      The next step is to enable myAGV to navigate automatically on the map: after clicking a destination, myAGV should reach it on its own while avoiding obstacles.
      First, we load the map by setting its path in our launch file.
      Path planning + motion control:
      Motion planning is a broad concept. The movement of robotic arms, the flight of aerial vehicles, and the path planning of the myAGV we are discussing here are all forms of motion planning.
      Let's talk about motion planning for this type of wheeled robot. The basic capability required here is path planning, that is, the ability to perform what is generally called target-point navigation after completing SLAM. In short, it means planning a path from point A to point B and then making the robot follow it.

      1. Global Planning.
      To achieve this, motion planning has to implement at least two levels of modules. The first is called global planning, which is a bit like a car navigator: it pre-plans a route on the map, using the robot's current position as provided by the SLAM system. The industry typically uses an algorithm called A* for this, an excellent heuristic search algorithm (a short A* sketch appears below, after the local planning note). It is also widely used in games; real-time strategy games like StarCraft and Warcraft use it to compute the movement paths of units.

      2. Local Planning
      Of course, just planning the path is not enough. There are many unexpected situations in reality: for example, a small child steps into the way, so the original path needs to be adjusted. Sometimes this adjustment does not require recalculating the global path; the robot may just make a slight detour. For this we need another level of planning called local planning. It may not know where the robot will finally end up, but it is particularly good at getting around the obstacles immediately in front of it.
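      To make the global planning idea concrete, here is a compact, hedged sketch of A* over an occupancy grid (illustrative only, not the planner the navigation stack actually uses):

      import heapq

      def astar(grid, start, goal):
          # grid: 2D list where 0 = free and 1 = occupied; start/goal: (row, col) tuples
          def h(a, b):  # Manhattan-distance heuristic
              return abs(a[0] - b[0]) + abs(a[1] - b[1])

          open_set = [(h(start, goal), 0, start, None)]
          came_from, g_cost = {}, {start: 0}
          while open_set:
              _, g, current, parent = heapq.heappop(open_set)
              if current in came_from:
                  continue                      # already expanded with an equal or better cost
              came_from[current] = parent
              if current == goal:               # reconstruct the path back to the start
                  path = []
                  while current is not None:
                      path.append(current)
                      current = came_from[current]
                  return path[::-1]
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nxt = (current[0] + dr, current[1] + dc)
                  if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                          and grid[nxt[0]][nxt[1]] == 0 and g + 1 < g_cost.get(nxt, float("inf"))):
                      g_cost[nxt] = g + 1
                      heapq.heappush(open_set, (g + 1 + h(nxt, goal), g + 1, nxt, current))
          return None                           # no path exists

      grid = [[0, 0, 0],
              [1, 1, 0],
              [0, 0, 0]]
      print(astar(grid, (0, 0), (2, 0)))        # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]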
      Next, we start the program and open the saved maps and the autopilot function.
      Enter the command in the terminal

      roslaunch myagv_navigation navigation_active.launch
      

      Use keyboard control to make myAGV rotate in place for positioning. After the positioning is completed and the point cloud converges, proceed to the next navigation step.
      0_1664507046384_建图4.jpg
      Click "2D Nav Goal" on the top, click the point on the map you want to reach, and myAGV will set off towards the target point. You can also see a planned path of myAGV between the starting point and the target point in RVIZ, and myAGV will move along the route to the target point.
      0_1664507107724_建图5.jpg
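      For reference, the same kind of goal that the "2D Nav Goal" button publishes can also be sent from a script. The sketch below uses the standard ROS move_base action interface; the frame name and coordinates are example values, so adjust them to your own map:

      #!/usr/bin/env python
      # Illustrative sketch: send a navigation goal to move_base from Python,
      # assuming the navigation launch file above is already running.
      import rospy
      import actionlib
      from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

      rospy.init_node("send_nav_goal")
      client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
      client.wait_for_server()

      goal = MoveBaseGoal()
      goal.target_pose.header.frame_id = "map"
      goal.target_pose.header.stamp = rospy.Time.now()
      goal.target_pose.pose.position.x = 1.0      # example target point on the saved map
      goal.target_pose.pose.position.y = 0.5
      goal.target_pose.pose.orientation.w = 1.0   # keep the current heading

      client.send_goal(goal)
      client.wait_for_result()
      print("Navigation finished with state:", client.get_state())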
      This is the end of the project.
      Summary:
      The navigation demonstrated here covers only a relatively basic scenario. Many mobile robots on the market, such as robot vacuums, need to plan paths for very different environments, which is more complicated. For this kind of problem there is a dedicated field called coverage path planning, which has produced many algorithms and theories of its own.
      There is still a lot of work to be done on SLAM-based navigation in the future.
      If you have good ideas, you are welcome to discuss them with us in the comments below!
      About Elephant Robotics:
      Home
      GitHub
      Gitbook for myAGV

      posted in General
      ElephantRobotics
    • RE: My first try with the little six-axis robotic arm| mechArm 270-M5Stack

      It looks great. Looking forward to more exciting projects with mechArm.

      posted in PROJECTS
      ElephantRobotics
    • Robot can be painter | myCobot Pro 600 drawing

      Background:

      When it comes to robotic arms, most people first think of industrial arms doing assembly-line work in factories, but that is only part of the picture. A robotic arm is like a precise human arm, and it can perform many tasks in daily life: pouring latte art, playing chess with people, ushering in restaurants, massage, ultrasound scanning, drawing, and more. Robotic arms have long been part of our daily lives and will appear in ever more forms in the future.

      Recently, I’ve seen a series of videos on YouTube and Twitter about writing machines and machine drawing, so I thought I’d use the myCobot Pro 600 I have on hand to see if I could draw with a robotic arm.

      What is myCobot Pro 600?

      myCobot Pro 600 is a robotic arm from Elephant Robotics developed for education and commercial use, with a Raspberry Pi as its core processor and the embedded RoboFlow visual programming environment. It uses industrial-grade servos, so its stability is comparable to an industrial robotic arm, which makes it well suited to being a painter.
      (What RoboFlow is will be described below.)
      0_1665741802296_111.1.jpg

      Plan:

      The plan is to extract the outline of a photo, convert the outline into Cartesian coordinates, and transfer those coordinates to the robotic arm to execute along the path; the arm can then draw the picture.
      Projects
      0_1665741521390_Projects.png

      Key points:

      1 Get the path/contour map of the image and convert it to Cartesian coordinates.

      2 Recognize the pen-up and pen-down positions along the contour of the image.

      Projects:

      1 Inkscape
      We need software that can draw graphics and convert images into outlines/paths. Here we recommend Inkscape, which allows us to develop plugins for it.

      Inkscape is free and open-source vector graphics editing software that fully supports open standard formats such as XML, SVG, and CSS. It is also cross-platform, running on Windows, macOS, Linux, UNIX, and more.
      Inkscape
      0_1665741589383_inkscape.png
      2 Unicorn
      Unicorn here refers to a lightweight, open-source Inkscape extension for exporting drawing paths as G-code.
      0_1665741604585_unicorn.png
      We used unicorn together with a custom myCobot Cartesian coordinate transformation to process the path/contour of the graphic and finally generate the NGC file.
      Contents of an NGC file

      (Elephant Robotics)
      G21 G94 G64 G40 (metric ftw)
      G90 (absolute mode)
      G92.2
      G4 P1 (wait 1s)
      G38.3F1000X38.18 Y-156.72 Z-60.01 A-121.90 B-31.55 C99.32
      G4 P1 (wait 1s)
      G01F1000X-505.03 Y-177.45 Z61.41 A67.37 B60.32 C-132.00
      G4 P1 (wait 1s)
      G92 X0 Y0 Z0 A0 B0 C0
      G4 P1 (wait 1s)
      G01 X0 Y0 Z20.00 A0 B0 C0
      G4 P1 (wait 1s)
      
      G0 Z0 (pen down)
      G4 P1 (wait 1ms)
      G0 Z10 (pen up)
      G4 P1 (wait 1ms)
      
      (Polyline consisting of 97 segments.)
      G1 X145.82 Y229.44 F2000.00
      G0 Z0.00 (pen down)
      G4 P1 (wait 1ms)
      G1 X151.53 Y225.73 F2000.00
      G1 X157.58 Y220.95 F2000.00
      ...
      

      (unicorn is an open-source project; if you are interested, please refer to GitHub for more details.)
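      As a side note, the generated NGC file is easy to post-process. The sketch below is a rough illustration of how the pen-up/pen-down convention shown above (Z0 = pen down, Z10 = pen up) could be parsed back into strokes; the file name and the Z threshold are assumptions for this example, not part of the original project:

      import re

      def parse_ngc(path, z_up_threshold=5.0):
          """Split an NGC file into strokes (polylines drawn without lifting the pen)."""
          strokes, current = [], []
          pen_down, last_xy = False, None
          with open(path) as f:
              for raw in f:
                  line = raw.split("(")[0].strip()          # drop G-code comments
                  if not line.startswith(("G0", "G1")):     # keep only motion commands
                      continue
                  words = dict(re.findall(r"([XYZ])(-?\d+\.?\d*)", line))
                  if "Z" in words:                          # Z encodes pen up / pen down
                      going_down = float(words["Z"]) < z_up_threshold
                      if going_down and not pen_down and last_xy:
                          current = [last_xy]               # stroke starts where the pen dropped
                      if not going_down:
                          if len(current) > 1:
                              strokes.append(current)
                          current = []
                      pen_down = going_down
                  if "X" in words and "Y" in words:
                      last_xy = (float(words["X"]), float(words["Y"]))
                      if pen_down:
                          current.append(last_xy)
          if len(current) > 1:
              strokes.append(current)
          return strokes

      # "drawing.ngc" is a placeholder file name for this illustration
      print(parse_ngc("drawing.ngc")[:1])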

      3 RoboFlow
      RoboFlow is the operating system for Elephant Robotics' collaborative robotic arms, with a user-friendly UI. It lets users access the arm's functions through simple operations even without knowing the underlying principles well. Finally, the NGC file generated from the image is transferred to RoboFlow and run, and the robotic arm draws the picture.
      myCobot pro 600 painting
      0_1665741537235_myCobot pro 600 painting.png

      0_1665741547002_222.png

      Process

      The project went as planned, but some problems arose.
      video:
      https://www.youtube.com/shorts/W4xHpRlfpOs

      Problems:

      1 Error in the NGC file when the outline/path of the image is generated.

      2 When drawing, some areas were not drawn. This comes down to the end effector: since the trajectory is fixed, too large a drop at some points presses the pen down hard and it stops marking.

      3 Some coordinates of myCobot exceed the limit and cannot be reached, so the program cannot proceed.

      Solutions:

      1 Choose pictures with relatively clear outlines, so that the generated NGC file can actually be drawn.

      2 Make the end effector compliant so that the pen is not damaged even when pressed down.

      3 When setting the initial position, keep it well within the working range, away from the joint limits.

      Show video:
      https://www.youtube.com/watch?v=QNvH5mAz4wU

      Summary:

      If you want any of the related material from this article (plugins, code), leave a comment below and I will share it with you. If you have good ideas, you are welcome to discuss them with us in the comments below!

      More information:
      Home | Elephantrobotics
      Gitbook | Elephantrobotics
      GitHub | Open-Source

      posted in General
      ElephantRobotics
    • Desktop Dual-arm Cobot, myBuddy 280 Focuses on Education and Research with Various Functions

      Background

      With the development of modern industry and advances in science and technology, the demands placed on industry, medical care, and services keep increasing. Single-arm robots can no longer meet these requirements; dual-arm robots are needed for tasks that demand complexity, intelligence, and flexibility. A dual-arm robot is not a simple combination of two robotic arms: besides their individual control goals, the two arms must also coordinate with each other and adapt to the environment. This high complexity makes dual-arm robots more demanding to operate, calling for advanced integrated systems, high-level planning and reasoning, and adjustable control methods.
      Dual-arm collaborative robots are the inevitable trend in the future of robotics.

      Introduction

      myBuddy 280 is Elephant Robotics' first dual-arm collaborative robot: a 13-axis humanoid service robot powered by a Raspberry Pi. Each arm has a working radius of 280mm and a maximum payload of 250g. It has a 7-inch interactive display and two 2-megapixel HD cameras, and it can be adapted to the needs of different applications.
      alt text
      alt text

      Functions

      Excellent algorithm control

      Dual-arm robots have clear advantages over single-arm robots. A dual-arm robot can do the work of two single arms at once for higher total throughput, reach two different positions simultaneously for separate operations, or even hand objects from one arm to the other. A robotic arm's trajectory is ultimately a single path, and designing algorithms to compute the optimal trajectory takes careful human work. Implementing this is quite complex because of factors such as redundant kinematics, collision avoidance, ambiguity in how a task can be performed, complex objective functions, and so on.
      With superior algorithms, myBuddy 280 can respond to commands as fast as 30ms, and with anti-collision detection, it can work safely with people.

      A more complete secondary development environment

      • Ultra-complete python control interface

        ■ Provides 100+ control interfaces for secondary application development and algorithm research.
        ■ Open interfaces for joint angles, speed control, and robot coordinate control make the robot more accessible and user-friendly.
        ■ Supports separate control of the left arm, right arm, and waist, putting more control at your fingertips.
        ■ Programming examples are provided to enable rapid deployment of scenario applications.

            # Send a single joint angle to the robot arm
            # send_angle(id, joint, angle, speed)
            #   id    - 1/2/3 (left arm / right arm / waist)
            #   joint - 1~6 (corresponding joint)
            #   angle - -180 ~ 180 (each joint has its own limits; check the product parameters for details)
            #   speed - 1 ~ 100 (the higher the value, the faster the arm moves)

            # Get the angle of a single joint
            # get_angle(id, joint_id)
            #   id       - 1/2/3 (left arm / right arm / waist)
            #   joint_id - 1~7 (7 is the gripper)

            # Send the radian values of all joints of the specified arm
            # send_radians(id, radians, speed)
            #   id      - 1/2 (left arm / right arm)
            #   radians - radian values stored as a list (List[float]), length 6
            #   speed   - 0 ~ 100 (the higher the value, the faster the arm moves)

            # There are many more functions; here is an example of their use
            from pymycobot.mybuddy import MyBuddy
            import time

            # MyBuddy('port', baud)
            mc = MyBuddy("/dev/ttyACM0", 115200)
            # Send angles to the six joints of the left arm
            mc.send_angles(1, [0, 0, 0, 0, 0, 0], 50)
            time.sleep(3)
            # Send an angle to the first joint of the right arm
            mc.send_angle(2, 1, 90, 50)
            time.sleep(2)
      

      code on GitHub

      • ROS robot control system support

        ■ Works with RVIZ, which can display images, models, paths, and other information with visual rendering, making it easier for developers to understand the meaning of the data.
        alt text
        ■ Works with MoveIt for motion planning, collision detection, kinematics, 3D perception, manipulation control, and more. When users plan paths and run into situations that require constraints, MoveIt's functions can be helpful.
        alt text

      • Self-developed software support
        ■ myBlockly: visual, modular programming software in the style of a graphical programming language. Like Scratch, it is an excellent way to get started with myBuddy 280 quickly.
        alt text
        ■ myStudio: a one-stop platform for using the robotic arms. It offers firmware updates, driver installation, and tutorials on how to use the robot arm.
        alt text

      • Configuration
        ■ 13 high-performance brushless DC servos, plus a seven-inch interactive display for image display and touch control.
        ■ Two built-in 2-megapixel cameras and a preconfigured OpenCV environment for rapid machine-vision development.
        ■ LEGO end-effector interfaces allow users to attach 3D-printed accessories for various scenarios.
        alt text

      Summary

      Dual-arm collaborative robots will dominate the future robotics landscape, and you could be designing more creative projects with myBuddy 280! Please leave your comments below and share them with us to start the journey of dual-arm collaborative robots!
      Learn more about us:
      Home | Elephant Robotics
      GitHub | Elephant Robotics
      Shop | Elephant Robotics

      posted in PROJECTS
      ElephantRobotics
    • RE: Robot can be painter | myCobot Pro 600 drawing

      1 The tests showed that the pattern needs a fairly pronounced outline.
      2 The working radius of the myCobot Pro 600 is 600 mm.
      3 This project is integrated into Elephant Robotics' RoboFlow. The code is available for reference.

      posted in General
      ElephantRobotics
    • myCobot VS mechArm | Find your preferred desktop 6-axis robotic arm

      Background:

      It is beyond doubt that robots will take over more human labor in the future. The industrialization of robotic arms has matured, and more and more people are becoming interested in them. Before moving on to industrial robots, learning with educational robots is the most effective path. With so many robotic arms for education and scientific research on the market, how do we choose?
      Here we present two desktop six-axis robotic arms that are preferred choices for individual developers who are new to robotics and want to build quick prototypes for personal or industrial use. We will compare the two robots to help you find the one that best fits your needs.
      First, let's introduce the differences between industrial robotic arms and collaborative robotic arms.

      Industrial robotic arm

      As the name suggests, industrial robotic arms can replace humans working in factories, which can reduce production costs, improve productivity, and replace humans in dangerous positions.

      Collaborative robotic arm

      A collaborative robotic arm can interact with humans directly, which means it can work alongside people. Most industrial robotic arms use a centrosymmetric structure, while collaborative robotic arms use a UR-style structure.
      alt text
      We will start with the two robotic arms.

      Introduction

      myCobot 280-M5Stack:
      myCobot 280-M5Stack is a multi-functional 6-axis collaborative robot powered by M5Stack-Basic and designed with a UR-style structure.

      mechArm 270-M5Stack:
      mechArm 270-M5Stack is similar to myCobot, but the structure of mechArm is centrosymmetric.

      Development
      The two robotic arms offer the same functionality. Both myCobot and mechArm let users quickly set up a robotic arm programming environment and understand the arm's control logic. They support development in multiple languages such as Python, C++, C#, and JavaScript. Elephant Robotics provides a Gitbook with detailed tutorials on everything from setting up the environment to controlling the robotic arm.
      ROS demo
      alt text

      Use the slider to control myCobot
      alt text
      MoveIt: planning the movement of myCobot.

      They can also work with the AI (Artificial Intelligence) Kit to learn machine vision and robotic arm motion together.
      video demo:
      https://www.youtube.com/watch?v=Y51VIikAhcs
      alt text
      The interfaces at the end of the robotic arm are LEGO-compatible; we can use accessories from Elephant Robotics or 3D-print our own to meet our development needs.

      Moreover, both support secondary development, development in mainstream programming languages, and full platform and system development.

      So what are the differences between them? Let’s look at their configuration.
      alt text
      The differences in working radius, positioning accuracy, and range of joint movements are due to their different structures.

      Structure

      The centrosymmetric structure of mechArm is currently the most widely used and classic type worldwide.
      alt text
      mechArm's joints 2, 3, and 4 are all bilaterally supported, allowing more stable and smooth arm movement, which is why the centrosymmetric structure has remained in use for decades.

      The joints of a UR-style robotic arm work without bilateral support, so the arm has a wider working radius and can move very flexibly. However, there is some deviation in its movement, because without that support the arm has to rely on its motors alone to stay stable.

      Joint rotation range

      mechArm
      0_1667210307356_mecharm.jpg
      myCobot
      alt text
      mechArm is limited in terms of movement, and myCobot is more flexible.

      Summary

      0_1667210226967_11.png
      mechArm is suitable for learning in the direction of industrial robotic arms, while myCobot is suitable for human-machine collaboration scenarios.
      Both robotic arms represent the current mainstream types, each with advantages and disadvantages. We hope this article helps you choose a robotic arm that meets your needs. If you still want to know more, feel free to comment below. If you like this article, please give us your support and a like; your likes are our motivation to keep posting!
      Learn more about us:
      Home | Elephant Robotics
      GitHub | Elephant Robotics
      Shop | Elephant Robotics

      posted in General
      ElephantRobotics
    • myCobot 280-Ard conveyor control in an industrial simulation

      This article was created by "Cang Hai Xiao" and is reproduced with the author's permission.
      This article walks through a small example of using a conveyor belt in a simulated industrial scene.
      Function description
      A small simulated industrial application was built with the myCobot 280-Arduino Python API and a load cell. A toy conveyor belt transfers parts to a weighing pan, an M5Stack Basic acts as the controller of an electronic scale that weighs the parts coming off the belt, and the weight in the pan is transmitted to the PC over UART.
      Using the PC as the host computer, a simple GUI was written in Python to display the weights in real time and allow the user to modify the weighing values.
      Here is a detailed video of how this project works.
      https://youtu.be/gAF7T5GvdYE
      The following is the detailed process of the project.

      Software
      alt text
      Hardware
      alt text
      Process
      Step1:
      Flash the Mega 2560 and ATOM firmware. (Check the Gitbook for details.)
      alt text
      Step2:
      Write a weighing program and upload the program to M5Stack Basic.

      Description:

      1. Initialize the serial port and set the connection mode to establish communication between the PC and the M5Stack Basic.
        alt text

      2. Calculate the ratio factor. The data read from the sensor by the M5Stack Basic are raw counts and need to be calibrated with a 100g weight and a 200g weight to work out the conversion factor into grams. In this case the ratio factor came out to about -7.81 (a worked example follows after this list).

      3. Combine the load-cell reading with the conversion factor and display the result as the weighed value.

      4. Use UART1 to send the data every 20ms. An average or median filter is recommended to smooth out the shock when a part drops into the hopper.

      5. Handle the zero (tare) button event, with 100ms of debouncing.

      This simple electronic scale program was written in UIFlow. The data can be sent to the PC over UART1 through a TTL-USB adapter, and the program is written to the M5Stack Basic with a single click on Download. I used the offline version of UIFlow for easier connection and debugging.
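      As promised above, here is a worked example of the ratio-factor calculation. The raw readings are made-up numbers chosen so that the factor comes out near the -7.81 counts per gram mentioned earlier; the real values depend on your load cell:

      # Illustrative calibration sketch; the raw readings below are invented, not measured.
      raw_zero = 84210          # reading with an empty pan
      raw_100g = 83429          # reading with the 100 g weight
      raw_200g = 82648          # reading with the 200 g weight

      # counts per gram: how much the raw value changes for each gram added
      ratio = (raw_200g - raw_100g) / (200.0 - 100.0)   # -> -7.81 counts per gram

      def to_grams(raw_reading):
          """Convert a raw load-cell reading into grams using the calibration."""
          return (raw_reading - raw_zero) / ratio

      print(to_grams(raw_100g))   # ~100.0
      print(to_grams(83000))      # an intermediate weight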

      Step3:
      Use myBlockly to debug the parameters for the press (drop arm) and release (lift arm) actions

      Step4:
      Write the PC program and install pymycobot.
      alt text
      alt text
      (1) First, build the GUI with the Tkinter library. We can set the threshold for the weighing control; in this test run I set 5g.
      alt text
      (2) Importing pymycobot
      alt text
      (3) The OK button's callback first makes myCobot lower its arm to power on the conveyor; the conveyor starts running and the electronic scale monitors the weight in real time. The loading() function reads the weighing data from the serial port, checks whether the threshold has been reached, and makes myCobot lift its arm when it has.
      Code:

      #============
      # Function:
      # 1. Set the target weighing value and display it in the GUI.
      # 2. Use a progress bar to show the progress of the weighing.
      # 3. When 99% of the target value is reached, a command is sent to
      #    myCobot to perform the stop (arm-lifting) operation.
      # date: 2022-11-10
      # version: 0.2
      # Joint adjustment: combined with the myCobot press and release actions
      #============
      from tkinter import *
      import tkinter.ttk
      import serial
      import time
      from pymycobot.mycobot import MyCobot
      from pymycobot.genre import Coord

      #==== Global variable initialisation
      global val            # Measured weight
      val = 0.0
      global iset           # Scale factor, based on the set value: setvalue/100
      iset = 5 / 5
      global c_set          # Input-box value used as the weighing criterion
      c_set = 0.0
      global action_flag
      action_flag = False
      # Progress bar maximum
      maxbyte = 100

      #====== myCobot initialization
      mc = MyCobot('COM23', 115200)
      mc.power_off()
      time.sleep(2)
      mc.power_on()
      time.sleep(2)
      print('is power on?')
      print(mc.is_power_on())
      time.sleep(2)
      mc.send_angles([95.97, (-46.4), (-133.3), 94.3, (-0.9), 15.64], 50)  # Arm lift
      time.sleep(2)

      #====== Serial port initialization
      try:
          arduino = serial.Serial("COM25", 115200, timeout=1)
      except:
          print("Port connection failed")
      ReadyToStart = True

      # Show the progress bar and start the weighing cycle
      def show():
          mc.send_angles([95.6, (-67.2), (-130.3), 101.9, (-2.2), 23.11], 50)  # Arm down
          # Set the current value of the progress bar
          progressbarOne['value'] = 0
          # Set the maximum value of the progress bar
          progressbarOne['maximum'] = maxbyte
          # Call the loading method
          loading()

      # Processing function: read the scale and update the GUI
      def loading():
          global byte
          global val
          global action_flag
          c_set = setvalue.get()
          iset = 100 / float(c_set)  # Calculate the scaling factor
          byte = arduino.readline().decode('utf-8')
          try:
              if len(byte) != 0:
                  val = byte
              else:
                  pass
          except:
              pass
          # Trigger myCobot when 99% of the set value has been reached
          if (1 - (float(c_set) - float(val)) / float(c_set)) >= 0.99 and action_flag == False:
              print("trigger")
              mc.send_angles([95.97, (-46.4), (-133.3), 94.3, (-0.9), 15.64], 50)  # Arm up
              action_flag = True  # Act only once, unless RESET is pressed
          # Set the progress bar position
          progressbarOne['value'] = (1 - (float(c_set) - float(val)) / float(c_set)) * 100
          # float(val)*iset
          # Display the measured weighing value in Label4
          strvar.set(str(float(val)))
          # Call the loading method again after 20ms
          progressbarOne.after(20, loading)

      # Reset button callback function
      def reset_click():
          global action_flag
          action_flag = False  # Reset the flag to prepare for the next action

      # OK button callback function
      def ok_click():
          show()

      #=========== UI design
      # Main window
      win = tkinter.Tk()
      win.title("mycobot")
      # Create a frame form object
      frame = tkinter.Frame(win, borderwidth=2, width=450, height=250)
      # Fill the form horizontally and vertically
      frame.pack()
      # Label for the set value; place() positions it from the upper left corner of the form
      Label1 = tkinter.Label(frame, text="Set value (g)")
      Label1.place(x=35, y=15, width=80, height=30)
      # Entry box for the target weighing value
      setvalue = tkinter.Entry(frame, fg='blue', font=("微软雅黑", 16))
      setvalue.place(x=166, y=15, width=60, height=30)
      # Label for the measured value caption
      Label3 = tkinter.Label(frame, text="Real Value (g)")
      Label3.place(x=35, y=80, width=80, height=30)
      # Label 4 shows the measured weight value, default 0.0 g
      strvar = StringVar()
      Label4 = tkinter.Label(frame, textvariable=strvar, fg='green', font=("微软雅黑", 16))
      Label4.place(x=166, y=80, height=30, width=60)
      progressbarOne = tkinter.ttk.Progressbar(win, length=300, mode='determinate')
      progressbarOne.place(x=66, y=156)
      # Buttons that trigger the reset and weighing actions
      resetbutton = tkinter.Button(win, text="Reset", width=15, height=2, command=reset_click).pack(side='left', padx=80, pady=30)
      okbutton = tkinter.Button(win, text="OK", width=15, height=2, command=show).pack(side='left', padx=20, pady=30)
      # Start the event loop
      win.mainloop()
      

      Step5:
      The program is debugged step by step:

      (1) Debug the electronic scale to ensure that the weighing is correct, and use weights for calibration. Make sure the data are correct.

      (2) Connect myCobot to the conveyor belt, and install a simple button at the end of myCobot, which can trigger the power supply of the conveyor belt when the arm is lowered.

      (3) Joint debugging. Set the threshold in the GUI, trigger myCobot to drop the arm, and then the conveyor belt starts to run (parts are transported and fall into the hopper, weighed in real time), and trigger the myCobot to lift the arm after reaching the threshold (5g).
      alt text
      Summary
      This is a simulated industrial application demonstrating the control functions of the myCobot 280 Arduino. The load cell plus the M5Stack Basic transmit the weighing data to the PC, indirectly reporting the running status of the conveyor belt. The PC uses the weighing data to monitor the parts coming off the belt, and when the threshold is reached, myCobot is triggered to lift its arm.

      The program is elementary; the host-computer side is only about 150 lines. The difficulty is minimal, making it suitable for beginners who want to get started with understanding, adjusting, and reading back the robotic arm's electrical and mechanical parameters.

      Thanks to Elephant Robotics' Drift Bottle Project for the myCobot 280 Arduino.

      Article from:https://www.arduino.cn/thread-111757-1-1.html

      posted in PROJECTS
      ElephantRobotics
    • RE: Smart Applications of Holography and Robotic Arms myCobot 320 M5Stack-Basic

      @holofloh
      Thank you for your comment!
      We have also looked up the relevant information and realized that this is not holographic technology; that was our mistake. The technique used in the project is an imaging technology based on the principle of Persistence of Vision (POV), which does not achieve as good an effect as true holography.
      Thank you once again for your message! We will try to use real holographic technology in combination with robotic arms in the future!

      posted in PROJECTS
      ElephantRobotics
    • RE: Robotic Arms powered by M5stack

      @ajb2k3
      Thanks. I'm glad to join this forum!

      posted in PROJECTS
      ElephantRobotics
    • The highest cost-performance mobile robotic platform for individual developers.

      1. Introduction
      myAGV is Elephant Robotics' first mobile robot. It uses competition-grade Mecanum wheels and a fully wrapped design with a metal frame. Two SLAM algorithms are built in for learning mapping and navigation. It provides multiple interfaces and can carry different robotic arms to become a compound robot for a wider range of applications.
      alt text
      1.1 Mecanum Wheel:
      The Mecanum wheels give myAGV omnidirectional movement, which cuts out a lot of unnecessary detours on the way to a destination and allows much more flexible motion. The sketch below shows the standard kinematic idea behind this.
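      For readers curious about how four Mecanum wheels produce sideways motion, here is the textbook inverse-kinematics model (chassis velocity to wheel speeds). The geometry values are placeholders, and this is not code taken from the myAGV firmware:

      # Textbook Mecanum-wheel inverse kinematics: chassis velocity -> wheel speeds.
      # L, W and R are placeholder geometry values, not myAGV's real dimensions.
      L = 0.10   # half the wheelbase (m)
      W = 0.10   # half the track width (m)
      R = 0.03   # wheel radius (m)

      def wheel_speeds(vx, vy, wz):
          """vx: forward m/s, vy: leftward m/s, wz: rotation rad/s -> wheel speeds in rad/s."""
          k = L + W
          front_left  = (vx - vy - k * wz) / R
          front_right = (vx + vy + k * wz) / R
          rear_left   = (vx + vy - k * wz) / R
          rear_right  = (vx - vy + k * wz) / R
          return front_left, front_right, rear_left, rear_right

      # Pure sideways motion: the wheels spin in a diagonal pattern
      print(wheel_speeds(0.0, 0.2, 0.0))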
      1.2 Detachable
      The fully wrapped design with a metal frame makes myAGV compact and tough. With the built-in Raspberry Pi 4B and a split structure, the robot can be disassembled into independent modules, so users can design and build their own DIY robot.

      2. Functions
      2.1 Mapping
      SLAM mapping is essential to using myAGV, because the core of autonomous driving for a mobile robot is autonomous positioning and navigation. Autonomous positioning and navigation involve problems such as localization, mapping, and path planning, and the quality of the map directly affects the path myAGV takes. To reach a specific destination, myAGV must first draw a map, just as humans do; describing and understanding the environment both rely on that map. The two mapping algorithms used by myAGV are introduced below.
      alt text
      2.1.1 Gmapping algorithm
      Gmapping is an efficient particle-filter SLAM algorithm based on 2D lidar. It uses RBPF (Rao-Blackwellized Particle Filters) to build a two-dimensional grid map. Gmapping can build indoor maps in real time, requiring little computation while achieving high accuracy for small scenes.
      RUN:
      First, place the mobile robot at a suitable starting point in the environment to be mapped, because launching the file starts the IMU sensor and odometry, and moving the robot by hand after that will distort its estimate. First open the SLAM scan file.
      Run the command:

      cd myagv_ros
      source ./devel/setup.bash
      roslaunch myagv_odometry myagv_active.launch
      

      alt text
      Then open the gmapping mapping file, and control the myAGV to move in the required space.
      Run the command:

      roslaunch myagv_navigation myagv_slam_laser.launch
      

      alt text
      Save the map when you see that the required space is created in the image
      Run the command to save the created map:

      rosrun map_server map_saver
      

      2.1.2 Cartographer algorithm
      Cartographer is a set of SLAM algorithms based on graph optimization.
      The Cartographer algorithm is built around the concept of submaps. Whenever a new laser scan arrives, it is matched against the most recently created submap so that the scan is inserted at its optimal position, and the submap is updated as new frames are inserted. A certain amount of data is combined into each submap; once no new scans are being inserted into a submap, that submap is considered finished and the next one is started. The specific process is as follows.
      0_1662101428056_p4.png
      All created submaps and the current laser scan are used for scan matching in loop-closure detection. Loop closure is attempted whenever the recent scan is close enough in distance to an existing submap. Once loop-closure detection is complete, the map construction is complete as well.
      The process of building a map with Cartographer is the same as with Gmapping: first open the lidar and the Cartographer launch file, then drive myAGV around the area to be mapped until the map is complete.
      alt text
      Some people may wonder why we introduce two mapping algorithms.
      Let's compare the two algorithms to answer that.
      Gmapping can build indoor maps in real time, with low computational cost and high accuracy for small scenes. For a small scene it does not need many particles and has no loop-closure detection, so it computes less than Cartographer while losing little accuracy; it is both fast and precise. As the map grows, however, the number of particles Gmapping needs increases and can easily exhaust memory, whereas Cartographer handles map growth well and still produces excellent results (the comparison picture below shows the two algorithms on the same terrain).
      0_1662101543574_gvsc.jpg
      2.2 Map Navigation
      On the map we built in the previous step, the robot can automatically navigate to a chosen destination. This capability comes from the ROS navigation package.
      0_1662101568126_p6.png
      move_base in navigation:
      global planner:
      The overall path planning is carried out according to the given target position.
      In ROS navigation, the robot's global route to the target position is first calculated through global path planning. The navfn package implements this: using Dijkstra's shortest-path algorithm, navfn computes the minimum-cost path on the costmap as the robot's global route.
      local planner:
      Plan the avoidance route based on nearby obstacles.
      Local real-time planning is implemented using the base_local_planner package. This package uses the Trajectory Rollout and Dynamic Window Approach algorithms to calculate the velocities (dx, dy, dtheta) the robot should command during each cycle.
      Using map data, base_local_planner searches multiple candidate paths to the target, scores them with evaluation criteria (whether they would hit an obstacle, the time required, and so on), selects the best one, and computes the real-time speed and heading required. A toy illustration of this sample-and-score idea follows below.
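      The following sketch is a heavily simplified stand-in for what base_local_planner does, just to illustrate "sample velocities, score them, pick the best". The obstacle list, goal, and scoring weights are invented for illustration:

      import math

      def score(vx, wz, goal, obstacles, dt=1.0):
          # Forward-simulate one step with this velocity command (crude arc approximation)
          x = vx * dt * math.cos(wz * dt)
          y = vx * dt * math.sin(wz * dt)
          clearance = min(math.hypot(x - ox, y - oy) for ox, oy in obstacles)
          if clearance < 0.10:                               # would hit an obstacle: reject
              return float("-inf")
          progress = -math.hypot(goal[0] - x, goal[1] - y)   # closer to the goal is better
          return progress + 0.5 * clearance

      goal = (1.0, 0.5)
      obstacles = [(0.4, 0.0), (0.6, 0.3)]
      candidates = [(vx, wz) for vx in (0.1, 0.2, 0.3) for wz in (-0.6, -0.3, 0.0, 0.3, 0.6)]
      best = max(candidates, key=lambda c: score(c[0], c[1], goal, obstacles))
      print("chosen (vx, wz):", best)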
      Next, we will introduce the process of our reproduction:
      To begin with, we had to determine myAGV's location, and we used AMCL in ROS to do it (finding myAGV's position on the map).
      alt text
      After spinning in place for a few turns to complete the localization, we can start automatic navigation.
      alt text
      When we click "2D Nav Goal" and then click the point we want to reach on the map, myAGV sets off towards the target. In RVIZ we can also see the planned path between the starting point and the target point, and myAGV follows that route.
      2.3 PS2 controller control
      As a mobile robot, wireless control is essential.
      We developed PS2 controller support for myAGV so that it can move freely and open up more possibilities.
      The following is the sequence diagram of our controller handling. It may give you some ideas for further development with the controller.
      0_1662101652187_手柄控制.drawio.png
      3 Summary
      What would you do if you had a mobile robot like this? With its split, detachable structure, what would you refit myAGV into? Would you arm it as a battle bot, or refit it to carry all kinds of futuristic machines?
      Thank you for watching. Please comment if you have any good ideas!
      The above is our initial introduction to myAGV, and the follow-up will introduce myAGV equipped with different robotic arms, including the myCobot, myPalletizer, and mechArm.
      If you own an M5Stack, how would you combine it with myAGV?

      posted in General
      ElephantRobotics
    • Control the most compact compound robot to do the intelligent sorting project

      Introduction

      Our theme is to break the distance limit of the collaborative robotic arm and connect it with the mobile robot (myAGV) to realize a case.
      The two devices we are going to use today are:
      1 mechArm 270-M5Stack:
      mechArm 270-M5Stack is the most compact six-axis robotic arm with an industrial-style configuration launched by Elephant Robotics. Its centrosymmetric structure makes it tough and reliable, and that reliability helps users program more efficiently.
      0_1663923863260_270.jpg
      2 myAGV:
      myAGV is Elephant Robotics' first mobile robot. It uses competition-grade Mecanum wheels and a fully wrapped metal frame. Its ROS development platform has two built-in SLAM algorithms for learning mapping and navigation, and it provides rich expansion interfaces so it can carry robotic arms (myCobot 280, mechArm 270, myPalletizer 260).
      0_1663923914020_agv.jpg

      Case

      What we want to achieve today is a compound robot combining mechArm 270-M5Stack and myAGV: we control myAGV to move to a designated position, then control the mechArm 270-M5Stack to grab a wooden block and place it at its designated position.

      Demo

      Connection

      To bring two robots together, they must first be connected. There are two ways to establish the connection:
      ● Wireless connection (TCP/IP)
      myAGV connects to the mechArm 270-M5Stack through its IP address. First put both devices on the same WiFi network and obtain the IP address of the mechArm 270-M5Stack. Elephant Robotics built an IP-address display into the M5Stack Basic, so the address can be read off quickly. (The port defaults to 9000.)
      0_1663923934097_1.1.png
      A brief note on sockets: in Python, sockets are used to establish communication so that the two sides can send information to each other.
      Elephant Robotics' open-source library, pymycobot, wraps this in a MyCobotSocket() class that uses sockets to send instructions to the robotic arm.
      Code:

      from pymycobot import MyCobotSocket
      # MyCobotSocket(ip,port) port defaults to 9000
      mc = MyCobotSocket("192.168.10.22","9000")
      
      #After the normal connection, you can send commands to the robot arm, such as returning the robot arm to the origin.
      mc.send_angles([0,0,0,0,0,0],20)
      #Get the angle information of the current robotic arm.
      res = mc.get_angles()
      print(res)
      
      

      ● Wired connection
      The wired connection is relatively easy. Plug in a type-C data cable to connect to myAGV, then we can control the robotic arm.
      Note: after connecting, because of the Ubuntu system's permission rules, we need to grant permission on the robotic arm's serial port before it will work. Type in the terminal:

      sudo chmod 777 /dev/tty***(***refers to the serial port number of the robotic arm)
      

      0_1663924133177_1.2.png.jpg

      Control

      Let myAGV move
      Once connected, we can start controlling this compound robot.
      For moving myAGV, Elephant Robotics provides two control methods: keyboard control and PS2 controller control.
      Both are driven through ROS (the steps are shown below).
      Keyboard control:
      Start Node:

      Enter the command in the terminal.

      roslaunch myagv_odometry myagv_active.launch
      

      0_1663924239607_1.3.png

      Open the keyboard control interface
      Enter the command in the terminal.

      roslaunch myagv_teleop myagv_teleop.launch
      

      0_1663924251412_1.4.png
      Press the corresponding button on the keyboard to make myAGV move.
      myAGV uses Mecanum wheels that allow movement in all directions, with an IMU for positioning compensation. It can turn around on the spot, and control is very easy.
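      Besides the keyboard node, the base can also be driven from a short script by publishing velocity commands. The sketch below assumes the conventional /cmd_vel topic; check the myAGV teleop launch file for the actual topic name used by your setup:

      #!/usr/bin/env python
      # Illustrative sketch: drive the base by publishing velocity commands.
      import rospy
      from geometry_msgs.msg import Twist

      rospy.init_node("simple_drive")
      pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
      rospy.sleep(1.0)                 # give the publisher time to connect

      cmd = Twist()
      cmd.linear.x = 0.2               # forward at 0.2 m/s
      cmd.linear.y = 0.1               # sideways, possible thanks to the Mecanum wheels
      rate = rospy.Rate(10)
      for _ in range(30):              # drive for about 3 seconds
          pub.publish(cmd)
          rate.sleep()
      pub.publish(Twist())             # stop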
      PS2 controller control:
      The first step is to start the node as before; the second is to launch the PS2 controller program.
      Enter the command in the terminal.

      roslaunch myagv_ps2 myagv_ps2.launch
      

      Once it is running, you can freely control myAGV with the PS2 controller.
      Implementation of the case
      Grab the block with the robotic arm and put it into the corresponding bucket.
      Combining the control of the mobile robot and the control of the robotic arm, this project can be realized.
      First, start myAGV's mobile control with either the keyboard or the PS2 controller; I chose the PS2 controller here. Drive the robot up to the small piece of wood, send commands to mechArm to control its movement and its gripper to grab the wood, and then place it at the corresponding location.
      Code for mechArm:

      from pymycobot import MyCobotSocket
      import time
      
      mc = MyCobotSocket("192.168.10.22","9000")
      
      mc.send_angles([0,0,0,0,0,0],50)
      time.sleep(2)
      mc.send_angles([1.75,58.53,29.44,4.92,(-77.69),5.09],50)
      time.sleep(2)
      mc.send_angles([1.75,76.02,(-25.31),12.3,(-61.61),(-2.81)],50)
      time.sleep(2)
      mc.set_gripper_state(0, 80)
      time.sleep(1)
      mc.set_gripper_value(50,80)
      time.sleep(1)
      mc.send_angles([2.37,(-2.1),(-9.66),9.66,68.64,(-33.13)],50)
      time.sleep(2)
      mc.send_angles([2.81,48.77,(-10.1),2.63,(-55.72),(-30.32)],50)
      time.sleep(1)
      mc.set_gripper_state(0, 80)
      

      0_1663924296053_微信图片_20220922182515.jpg

      Summary

      What do you think of this case? If you have any ideas or opinions, please leave a message below! I'll try out the most interesting suggestions!
      More information:
      Home | Elephantrobotics
      Gitbook | Elephantrobotics
      GitHub| Open-Source

      posted in General
      ElephantRobotics
    • RE: First experience with myCobot280-M5Stack

      This looks great.
      Hope you can share more of your operating experience!

      posted in General
      ElephantRobotics
    • RE: First experience with myCobot280-M5Stack

      @kehu Of course! You can try out all kinds of development, and feel free to contact us if you run into anything on the myCobot 280 that you cannot get working.

      posted in General
      ElephantRobotics
    • RE: myAGV | SLAM-based autonomous localization and navigation for robots

      Of course you can, but you need to take the power supply into account. myAGV comes with its own 12V 2A battery, and adding an M5Stack development board will reduce its battery life.

      posted in General
      ElephantRobotics