Stereo vision cameras subassembly.

Stereo vision is an important part of Mimas: it is not strictly compulsory, but paired with a good algorithm it can massively facilitate autonomous navigation. In addition, the stereo cameras can take pictures to be "sent to Earth" (or in other words, to my PC). With a pair of red-and-blue glasses, these can be viewed with an old-fashioned 3D photo effect. As mentioned in previous posts about computer vision, a PS4 camera was selected to be mounted on the Mast Tower. This required designing a custom enclosure for the cameras' motherboard and running some tests before attaching it to the bottom of the Imager. These tests were run in the open air, with the stereo cameras mounted on a tripod.
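
As a rough illustration of that anaglyph effect, a left/right image pair can be merged with OpenCV by taking the red channel from the left frame and the remaining channels from the right one. The sketch below is a minimal example; the filenames are placeholders, not files from this project.

```python
import cv2
import numpy as np

# Minimal red-blue anaglyph: red channel from the left image,
# green and blue channels from the right image.
left = cv2.imread("left.jpg")    # placeholder filename
right = cv2.imread("right.jpg")  # placeholder filename

anaglyph = np.zeros_like(left)
anaglyph[:, :, 2] = left[:, :, 2]   # red channel (OpenCV uses BGR order)
anaglyph[:, :, 1] = right[:, :, 1]  # green channel
anaglyph[:, :, 0] = right[:, :, 0]  # blue channel

cv2.imwrite("anaglyph.jpg", anaglyph)
```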

  • [Hours of work: 2]
  • [People involved: Giorgio]


Assembling the stereo cameras.

The stereo vision cameras attached to the Remote Sensing Mast came with a proprietary enclosure designed by Sony, but unfortunately its shape is not ideal for our purpose. For this reason, the original black casing was removed and the camera lens protectors were reattached afterwards. The cable that was modified in October also needed some improvements, mainly in terms of shielding the wires.

  • [Hours of work: 3]
  • [People involved: Giorgio]

MasterCam.

Since July 2022, the Mast Tower v.1 has had a cheap 720p webcam installed at the top, operating as the "Imager". When the time came to choose the camera for the Mast Tower v.2, a 1080p or 2K camera was originally considered. However, the purchase was delayed for several months due to a lack of budget. In the end, a different approach was adopted: instead of buying a new camera, the hardware used for Mast Tower v.1 was moved to v.2. This choice, apart from being partly forced by budget constraints, carries an emotional aspect: the camera inside Mast Tower v.1 spent the last six months on the workstation, partially abandoned, "looking" at the rover slowly but progressively being built. Now, however, it gets a new life, moving from Mast Tower v.1 to Mast Tower v.2.

  • [Hours of work: 2]
  • [People involved: Giorgio]


Horizontal Ultrasonic Mapping And Navigation (HUMAN).

Horizontal Ultrasonic Mapping And Navigation (HUMAN) is a simple system that Mimas inherited from the RC-VSTB, on which it was the only navigation system available. The RC-VSTB was essentially driving blind, detecting obstacles with a single ultrasonic sensor. That system was not particularly accurate, but it was good enough to avoid major hazards. On Mimas, a proper mount has been designed to hold two ultrasonic sensors just in front of the rover's belly. This system would not work on Mars, since the extremely thin atmosphere does not transmit sound reliably, which rules out correct ultrasonic measurements. However, Mimas does not drive on Mars! So HUMAN represents a great and cheap way to support the rover's autonomous navigation. The cameras have been tested and perform relatively poorly in dark environments, whereas ultrasonic sensors are unaffected by lighting conditions. HUMAN cannot be used on its own, because its accuracy is too low, but it will be used to enrich the data collected by the seven onboard cameras.
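
For illustration, assuming common HC-SR04-style sensors driven from a Raspberry Pi (the exact sensor model and wiring are not stated in this post), a single distance reading could be taken roughly like this with the RPi.GPIO library; the pin numbers are placeholders.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical BCM pin numbers; the real wiring on Mimas is not documented here.
TRIG, ECHO = 23, 24

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # Fire a 10 microsecond trigger pulse.
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)

    # Measure the echo pulse width, proportional to the round-trip time.
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()

    # distance = round-trip time * speed of sound in air (~343 m/s) / 2
    return (end - start) * 34300 / 2

try:
    while True:
        print(f"{read_distance_cm():.1f} cm")
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```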

  • [Hours of work: 4]
  • [People involved: Giorgio]


Just enough cameras!

Taking advantage of a good discount, four webcams were purchased to reach the final total number of cameras needed on board. They are all 720p HD cameras with USB connectors, ideal for computer vision tasks since their video streams are light enough for a Raspberry Pi. Full HD (1080p) cameras would also be great, but one of the objectives of this project is to keep the system as light as possible. The four webcams will join the three existing cameras for a total of seven onboard cameras, used for the following purposes: 1x MasterCam (on the Mast), 2x NavCam (Navigation Cameras, on the Mast), 1x HazCam (Hazard Camera, rear of the rover), 2x SurCam (Surrounding Cameras, LHS and RHS of the rover).
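
As a sketch of how such a camera set might be opened at 720p with OpenCV on the Pi, something like the following could work; the device indices and name-to-device mapping are assumptions, since they depend on the USB enumeration order.

```python
import cv2

# Hypothetical device indices; the real name-to-device mapping depends on
# the USB enumeration order on the Raspberry Pi.
CAMERAS = {"MasterCam": 0, "NavCam_L": 1, "NavCam_R": 2,
           "HazCam": 3, "SurCam_L": 4, "SurCam_R": 5}

for name, index in CAMERAS.items():
    cap = cv2.VideoCapture(index)
    # Request 720p to keep each stream light on the Pi.
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
    ok, frame = cap.read()
    print(f"{name}: {'frame ' + str(frame.shape) if ok else 'no frame'}")
    cap.release()
```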

  • [Hours of work: -]
  • [People involved: Giorgio]


The "Green Lines Algorithm"

Mimas Rover will have autonomous navigation capability, and multiple systems will be used to achieve it, one of which is the "Green Lines Algorithm" (GLA). The algorithm has been a work in progress since June 2022. This post will not go in depth on it, since it still needs several improvements. In order to run some tests (the rover structure is not ready yet, so a real driving test is impossible at the moment), a simulation of the Mars surface was made by modelling the terrain in Blender. A webcam was then placed in front of the laptop screen and the algorithm was run to test its effectiveness. Although the results are still not great and the GLA requires several improvements, the main principle of the algorithm was shown to work: rocks and possible hazards are detected!
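
Since the GLA internals are not described here, the following is only a hypothetical sketch of the general idea, drawing green lines over image strips judged free of hazards; the edge-density criterion and thresholds are my assumptions, not the actual algorithm.

```python
import cv2
import numpy as np

# Hypothetical illustration only: the real Green Lines Algorithm is not
# documented in this post. This sketch marks low-edge-density vertical
# strips (assumed hazard-free) with green lines.
frame = cv2.imread("terrain.jpg")          # placeholder input
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)           # rocks produce dense edges

h, w = edges.shape
strip = w // 16                            # split the frame into 16 strips
for x in range(0, w - strip + 1, strip):
    density = np.count_nonzero(edges[:, x:x + strip]) / (h * strip)
    if density < 0.05:                     # arbitrary "clear path" threshold
        cv2.line(frame, (x + strip // 2, h), (x + strip // 2, h // 2),
                 (0, 255, 0), 2)           # green = candidate path

cv2.imwrite("gla_debug.jpg", frame)
```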

  • [Hours of work: >20h]
  • [People involved: Giorgio]


Stereo vision.

Stereo vision is how humans perceive depth. The word "stereo" comes from the Greek stereos, meaning "solid". We look at the same scene from two slightly different viewpoints to get a sense of depth, so in order to simulate it, two identical cameras are required: a gaming console stereo camera was bought for this purpose. However, the company that produced the camera used a custom USB 3 connector, so that it can only be used on their consoles. To overcome this problem, a common USB 3 cable was cut and tin-soldered to the camera cable, essentially stitching them together. This operation required a series of delicate steps and, in the end, all cables were shielded to avoid electromagnetic interference during operation. Another challenge was deploying custom open-source firmware to communicate directly with the camera hardware. The final result works quite well, and a few depth perception tests were carried out.
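
As a sketch of the kind of depth perception test mentioned above, OpenCV's StereoBM block matcher can turn a left/right pair into a disparity map; the camera indices and parameters below are assumptions, and a real setup would first rectify the images with calibration data.

```python
import cv2

# Minimal disparity-map sketch with OpenCV's StereoBM block matcher.
capL = cv2.VideoCapture(0)  # assumed left camera index
capR = cv2.VideoCapture(1)  # assumed right camera index

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

okL, left = capL.read()
okR, right = capR.read()
if okL and okR:
    grayL = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    grayR = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    # Larger disparity values correspond to closer objects.
    disparity = stereo.compute(grayL, grayR)
    # Normalize to 0-255 so the map can be saved as an image.
    disp_vis = cv2.normalize(disparity, None, 0, 255,
                             cv2.NORM_MINMAX, cv2.CV_8U)
    cv2.imwrite("disparity.png", disp_vis)

capL.release()
capR.release()
```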

  • [Hours of work: 4h]
  • [People involved: Giorgio, Akshit]


Obstacle detection.

Obstacle detection and target tracking are two major issues for autonomous vehicles, and Mars rovers require this technology to drive safely on the surface of the Red Planet. Developing an early-stage obstacle avoidance system using OpenCV required dozens of hours: the requirements focused mainly on a simple, reliable solution that could easily be adjusted alongside the evolution of the Mimas prototype. At the end of this session, an acceptable design was reached.
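
The actual design is not published in this post, but a simple, easily adjustable detector in the spirit described could look roughly like this hypothetical contour-based sketch.

```python
import cv2

# Hypothetical sketch of a simple contour-based obstacle detector;
# the system actually developed for Mimas may differ.
frame = cv2.imread("scene.jpg")            # placeholder input
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

# Objects that contrast with the ground stand out after Otsu thresholding.
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) > 500:           # ignore small specks
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("obstacles.jpg", frame)
```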

  • [Hours of work: 40h]
  • [People involved: Giorgio]

Monocular vision target distance calculation.

Alongside the design process of the Mimas Rover, computer vision experiments were carried out. The focus was on coding an algorithm to calculate the distance between the camera and a given target, using monocular vision in OpenCV. The distance is estimated from the apparent size of the target and its position with respect to the centre of the frame. This algorithm will require several improvements before being actually useful for driving the Mimas Rover autonomously.
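
The standard way to do this is the pinhole-camera similar-triangles relation, distance = (real width × focal length in pixels) / apparent width in pixels; the sketch below uses illustrative constants, not the project's calibration values.

```python
# Pinhole-camera similar triangles: distance = (W_real * f_px) / w_pixels.
# The constants below are illustrative, not the project's calibration.
FOCAL_LENGTH_PX = 700.0     # focal length in pixels (from calibration)
TARGET_WIDTH_M = 0.20       # known real-world width of the target

def distance_to_target(pixel_width: float) -> float:
    """Estimate camera-to-target distance in metres."""
    return TARGET_WIDTH_M * FOCAL_LENGTH_PX / pixel_width

# Example: a 0.20 m target that appears 140 px wide is roughly 1 m away.
print(f"{distance_to_target(140):.2f} m")
```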

  • [Hours of work: 10h]
  • [People involved: Giorgio]

Color detection.

Computer vision is one of the most exciting fields of computer science: it consists of a series of operations, often powered by Artificial Intelligence (AI), to process images and videos. From an engineering perspective, its aim is to automate tasks that the human visual system can do. For this project, the OpenCV library is used: OpenCV is an open-source library that can take advantage of multi-core processing and hardware acceleration. The first step in approaching computer vision was learning to detect colours and geometric shapes and to perform simple tasks using data collected from a camera.
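
A minimal colour detection exercise of this kind with OpenCV typically masks a hue range in HSV space; the red range below is an illustrative assumption.

```python
import cv2
import numpy as np

# Basic HSV colour detection, roughly the kind of first exercise
# described above; input filename and colour range are placeholders.
frame = cv2.imread("sample.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Hue ~0-10 catches one band of red (red wraps around the hue circle).
lower = np.array([0, 120, 70])
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Keep only the pixels that matched the colour range.
detected = cv2.bitwise_and(frame, frame, mask=mask)
cv2.imwrite("red_only.jpg", detected)
```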

  • [Hours of work: 4h]
  • [People involved: Giorgio]