Sensors / Sensing Systems Archives - The Robot Report

Forcen closes funding to develop ‘superhuman’ robotic manipulation
April 28, 2024

Forcen is offering customized and off-the-shelf sensors to aid robotic manipulation in complex environments.

Forcen says its technology will help robotic manipulation advance as vision has. Source: Forcen

Forcen last week said it has closed a funding round of CAD $8.35 million ($6.1 million U.S.). The Toronto-based company plans to use the investment to scale up production to support more customers and to continue developing its force/torque sensing technology and edge intelligence.

“We’ve been focused on delivering custom solutions showcasing our world-first technology with world-class quality … and we’re excited for our customers to announce the robots they’ve been working on with our technology,” stated Robert Brooks, founder and CEO of Forcen. “Providing custom solutions has limited the number of customers we take on, but now we’re working to change that.”

Founded in 2015, Forcen said its goal is to enable businesses to easily deploy “(super)human” robotic manipulation in complex and unstructured applications. The company added that its technology is already moving into production with customers in surgical, logistics, humanoid, and space robotics.

Forcen offers two paths to robot manipulation

Forcen said its new customizable offering and off-the-shelf development kits will accelerate development for current customers and help new ones adopt its technology.

The rapidly customizable offering will use generative design and standard subassemblies, noted the company. This will allow customers to select the size, sensing range/sensitivity, overload protection, mounting bolt pattern, and connector type/location.

By fulfilling orders in as little as four to six weeks, Forcen claimed that it can replace the traditional lengthy catalog of sensors, so customers can get exactly what they need for their unique applications.
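To make that parameter space concrete, here is a purely hypothetical order specification sketched in Python; the field names, units, and values are illustrative assumptions, not Forcen's actual ordering schema:

    # Hypothetical spec for a customized force/torque sensor order.
    # All field names and values are illustrative assumptions, not
    # Forcen's actual ordering schema.
    custom_sensor_order = {
        "size_mm": {"diameter": 40, "height": 12},       # overall package size
        "sensing_range_n": 100,                          # full-scale force range
        "sensitivity_n": 0.05,                           # smallest resolvable force
        "overload_protection_x": 10,                     # overload limit vs. full scale
        "bolt_pattern": "4x M3 on 30 mm circle",         # mounting interface
        "connector": {"type": "M8 4-pin", "location": "radial"},  # type/location
    }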

The company will launch its off-the-shelf development kits later this year. They will cover three- and six-degree-of-freedom (3- and 6-DoF) force/torque sensors, as well as Forcen’s cross-roller, bearing-free 3-DoF joint torque sensor and 3-DoF gripper finger.

Off-the-shelf development kits will support different degrees of freedom. Source: Forcen

Force/torque sensors designed for complex applications

Complex and less-structured robotics applications are challenging for conventional force/torque sensing technologies because of the risk of repeated impact/overload, wide temperature ranges/changes, and extreme constraints on size and weight, explained Forcen. These applications are becoming increasingly common in surgical, logistics, agricultural/food, and underwater robotics.

Forcen added that its “full-stack” sensing systems are designed for such applications using three core proprietary technologies:

  • ForceFilm — A monolithic thin-film transducer that enables sensing systems that are lighter, thinner, and more stable against both drift and temperature, the company said. Forcen added that it is especially scalable for multi-dimensional sensing.
  • Dedicated Overload — A protection structure that acts as a 6 DoF hard stop. The company said it allows sensitivity and overload protection to be designed separately and enables durable use of the overload structure for thousands of overload events while still achieving millions of sensing cycles.
  • Synap — Forcen’s onboard edge intelligence comes factory-compensated and calibrated and can connect to any standard digital bus (USB, CAN, Ethernet, EtherCAT). The company said this creates “a full-stack force/torque sensing solution that is truly plug-and-play” with maintenance- and calibration-free operation.

New offerings include features to support demanding robotics applications. Source: Forcen

Learn about Forcen at the Robotics Summit

Brightspark Ventures and BDC Capital’s Deep Tech Venture Fund co-led Forcen’s funding round, with participation from Garage Capital and MaRS IAF, as well as returning investors including EmergingVC.

“Robotic vision has undergone a revolution over the past decade and is continuing to accelerate with new AI approaches,” said Mark Skapinker, co-founder and partner at Brightspark Ventures. “We expect robotic manipulation to quickly follow in the footsteps of robotic vision and Forcen’s technology to be a key enabler of ubiquitous human-level robotic manipulation.”

Forcen is returning to the Robotics Summit & Expo this week. It will have live demonstrations of its latest technology in Booth 113 at the Boston Convention and Exhibition Center. 

CEO Brooks will be talking on May 1 at 4:15 p.m. EDT about “Designing (Super)Human-Level Haptic Sensing for Surgical Robotics.” Registration is now open for the event, which is co-located with DeviceTalks Boston and the Digital Transformation Forum.


Bota Systems launches PixONE force-torque sensors for robotics
April 23, 2024

Bota Systems says it designed its PixONE force-torque sensors to keep cabling inside robotic arms.

The new PixONE force-torque sensor on an industrial robot. | Source: Bota Systems

Bota Systems AG today released PixONE, a sensor that brings together high-performance electronics with a compact, lightweight design. Founded in 2020 as an ETH Zurich spin-off, the company specializes in multi-axis force-torque sensors.

Zurich-based Bota said it designed its latest sensors for “seamless integration into robotic systems.” PixONE features a through-hole architecture facilitating internal cable routing to enhance robot agility and safety, claimed the company.

The sensor’s hollow shaft design makes it easier for users to connect a robot’s arm and end-of-arm tooling (EOT or EOAT) while maintaining the integrity of internal cable routing, said Bota Systems. It added that this design can be particularly helpful because many robot arm manufacturers are moving toward internal routing to eliminate cable tangles and motion restrictions. 

“Our objective is to equip robots with the sense of touch, making them not only safer and more user-friendly, but also more collaborative,” stated Klajd Lika, co-founder and CEO of Bota Systems. “PixONE is an advanced, OEM-ready sensing solution that enables robot developers and integrators to effortlessly enhance any robot in development with minimal integration effort.”

PixONE’s minimalist design is lightweight

PixONE has a minimalistic two-piece design. Bota Systems said this simplifies the assembly and significantly reduces the sensor’s weight, making it 30% lighter than comparable sensors on the market. This is critical for dynamic systems such as fast-moving robots, where excess weight can impede performance and operational efficiency, it said. 

Bota Systems offers PixONE in various models with an external diameter starting at 2.36 in. (60 mm) and a through-hole diameter of 0.59 in. (15 mm). The sensors include an inertial measurement unit (IMU) and have an IP67 waterproof rating. The company said these features make them suitable for a wide range of operational environments.

“The PixONE offers a higher torque-to-force ratio than comparative sensors with integrated electronics, which gives integrators more freedom in EOT design, especially with larger tools,” said Ilias Patsiaouras, co-founder and chief technology officer of Bota Systems. “PixONE elevates the sensor integration by offering internal connection and cable passthrough, making it ideal for a wide spectrum of robotic applications, ranging from industrial to medical.”

The PixONE configurations can support payloads up to 551 lb. (250 kg). Bota said it maintained a uniform interface across all models to facilitate rapid integration.

The PixONE’s design also minimizes external connections and component count, enhancing system reliability, according to the company. PixONE uses EtherCAT technology for high-speed data communication and supports Power over Ethernet (PoE).
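In ROS-based systems, six-axis force-torque readings are conventionally published as geometry_msgs/WrenchStamped messages, and Bota lists ROS among its supported software integrations. The sketch below is a generic subscriber of that kind; the topic name is an assumed placeholder, not a documented Bota interface:

    #!/usr/bin/env python
    # Minimal ROS 1 subscriber for six-axis force-torque data.
    # The topic name "/ft_sensor/wrench" is a placeholder, not a
    # documented Bota Systems interface.
    import rospy
    from geometry_msgs.msg import WrenchStamped

    def on_wrench(msg):
        f, t = msg.wrench.force, msg.wrench.torque
        rospy.loginfo("F=(%.2f, %.2f, %.2f) N  T=(%.3f, %.3f, %.3f) Nm",
                      f.x, f.y, f.z, t.x, t.y, t.z)

    if __name__ == "__main__":
        rospy.init_node("ft_listener")
        rospy.Subscriber("/ft_sensor/wrench", WrenchStamped, on_wrench)
        rospy.spin()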

See Bota Systems at the Robotics Summit & Expo

Bota Systems is an official distribution and integration partner of Universal Robots and Mecademic. In October 2023, it added NEXT Robotics to its distributor network.

That same month, Bota Systems raised $2.5 million in seed funding. The company said it plans to use the funding to grow its team to address increasing demand by leading research labs and manufacturing companies. It also plans to accelerate its product roadmap.

To learn more about Bota Systems, visit the company at Booth 315 at the Robotics Summit & Expo, which will be on May 1 and 2 in Boston.

“Our vision is to equip robots with the sense of touch, making them not only safer and more user-friendly, but also more collaborative,” Klajd Lika, co-founder and CEO of Bota Systems, told The Robot Report. “We look forward to the Robotics Summit & Expo because it brings together the visionaries and brightest minds of the industry — this interaction is valuable for us to shape the development of our next generation of innovative sensors.” 

This will be the largest Robotics Summit ever. It will include more than 200 exhibitors, various networking opportunities, a women in robotics breakfast, a career fair, an engineering theater, a startup showcase, and more. Registration is now open for the event.


Advanced Navigation’s Hydrus explores shipwrecks in the Indian Ocean
April 21, 2024

Advanced Navigation recently sent Hydrus to the depths of the Rottnest ship graveyard, located off the coast of Western Australia.

Advanced Navigation’s Hydrus micro autonomous underwater vehicle (AUV) deployed. | Source: Advanced Navigation

Advanced Navigation is bringing humans closer to the ocean with Hydrus, a relatively small underwater drone. The company recently sent Hydrus to the depths of the Rottnest ship graveyard, located in the Indian Ocean and just off the coast of Western Australia. 

The Sydney, Australia-based developer of AI robotics and navigation technology said that upon reviewing the gathered data, the team discovered a 210-ft. (64-m) shipwreck scattered across the sea floor, making the wreck more than twice the length of a blue whale.

“We’ve found through all of our testing that Hydrus is very reliable, and it will complete its mission and come to the surface or come to its designated return point,” Alec McGregor, Advanced Navigation’s photogrammetry specialist, told The Robot Report. “And then you can just scoop it up with a net from the side of the boat.”

Robot can brave the ocean’s unexplored depths

Humans have only explored and charted 24% of the ocean, according to Advanced Navigation. The unexplored parts are home to more than 3 million undiscovered shipwrecks, and 1,819 recorded wrecks are lying off Western Australia’s shore alone.

These shipwrecks can hold keys to our understanding of past culture, history, and science, said the company.

The Rottnest graveyard is a particularly dense area for these abandoned ships. Beginning in the 1900s, the area became a burial ground for ships, naval vessels, aircraft, and secretive submarines. A majority of these wrecks haven’t been discovered because the depth ranges from 164 to 656 ft. (50 to 200 m). 

Traditionally, there are two ways of gathering information from the deep sea, explained McGregor. The first is divers, who have to be specially trained to reach the depths Advanced Navigation is interested in studying. 

“Some of the wrecks that we’ve been looking at are in very deep water, so 60 m [196.8 ft.] for this particular wreck, which is outside of the recreational diving limit,” McGregor said. “So, you actually have to go into tech diving.”

“And when you go deeper with all of this extra equipment, it tends to just increase the risks associated with going to depth,” he said. “So, you need to have special training, you need to have support vessels, and you also have to be down in the water for a long period of time.”

The second option is to use remotely operated vehicles (ROVs) or autonomous underwater vehicles (AUVs). While this method doesn’t involve putting people at risk, it can still be expensive. 

“Some of the drawbacks with using traditional methods include having to have big support vessels,” McGregor said. “And getting the actual ROVs in and out of the water sometimes requires a crane, whereas with the Hydrus, you can just chuck it off the side of the boat.”

“So, with Hydrus, you’re able to reduce the costs of operation,” he added. “You’re also able to get underwater data super easily and super quickly by just chucking a Hydrus off the boat. It can be operated with one person.”

Advanced Navigation uses ‘wet electronics’

One of the biggest challenges with underwater robotics, McGregor said, is keeping important electronics dry. Conventional ROVs do this with pressure chambers. 

“Traditional ROVs have big chambers which basically keep all the electronics dry,” he noted. “But from a mechanical point of view, if you want to go deeper, you need to have thicker walls so that they can resist the pressure at depth.”

“If you need thicker walls, that increases the weight of the robot,” said McGregor. “And if you increase the weight, but you still want the robot to be buoyant, you have to increase the size. It’s just this kind of spiral of increasing the size to increase the buoyancy.”

“What we’ve managed to do with Hydrus is we have designed pressure-tolerant electronics, and we use a method of actually having what we call ‘wet electronics,'” McGregor said. “This involves basically potting the electronics in a plastic material. And we don’t use it to keep the structural integrity of the robot. So we don’t need a pressure vessel because we’ve managed to protect our electronics that way.” 

Once it’s underwater, Hydrus operates fully autonomously. Unlike traditional ROVs, the system doesn’t require a tether to navigate underwater, and the Advanced Navigation team has limited real-time communication capabilities. 

“We do have very limited communication with Hydrus through acoustic communications,” McGregor said. “The issue with acoustic communications is that there’s not a lot of data that can be transferred. We can get data such as the position of Hydrus, and we can also send simple commands such as ‘abort mission’ or ‘hold position’ or ‘pause mission,’ but we can’t physically control it.”
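That bandwidth constraint is why the command set stays so small. As a hypothetical illustration of the idea (not Advanced Navigation's actual protocol), a command or a position report over an acoustic link could be packed into just a few bytes:

    import struct

    # Hypothetical acoustic message formats -- illustrative only,
    # not Advanced Navigation's actual protocol.
    COMMANDS = {"abort_mission": 0x01, "hold_position": 0x02, "pause_mission": 0x03}

    def pack_command(name):
        # 2 bytes: message type 0x10, then the command code.
        return struct.pack(">BB", 0x10, COMMANDS[name])

    def pack_position(lat_deg, lon_deg, depth_m):
        # 11 bytes: type 0x20, lat/lon as signed microdegrees,
        # depth in decimeters.
        return struct.pack(">BiiH", 0x20,
                           int(lat_deg * 1e6), int(lon_deg * 1e6),
                           int(depth_m * 10))

    # An "abort mission" command fits in just two bytes.
    assert pack_command("abort_mission") == b"\x10\x01"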


Hydrus provides high-resolution data

While Hydrus has impressive autonomous capabilities, it doesn’t find wrecks all on its own. In this case, McGregor said, Advanced Navigation worked closely with the Western Australian (WA) Museum to find the wreck.

The museum gave the company a rough idea of where a shipwreck could be. Then the team sent Hydrus on a reconnaissance mission to determine the wreck’s exact location. 

“When we got Hydrus back on board, we were able to offload all the data and reconstruct the mission based on the images and from that, we were then able to see where the shipwreck was,” McGregor said. “One of the good things about Hydrus is that we can actually get geo-referenced data onto the water with auxiliary systems that we have on the boat.”

Hydrus gathered 4K geo-referenced imagery and video footage. Curtin University HIVE, which specializes in shipwreck photogrammetry, used this data to rebuild a high-resolution 3D digital twin of the wreck. Ross Anderson, a curator at the WA Museum, closely examined the digital twin. 

Anderson found that the wreck was an over 100-year-old coal hulk from Fremantle Port’s bygone days. Historically, these old iron ships were used to service steamships in Western Australia. 

In the future, the team is interested in exploring other shipwrecks, like the SS Koombana, an ultra-luxury passenger ship. The ship ferried more than 150 passengers before it vanished into a cyclone in 1912.

However, Advanced Navigation isn’t just interested in gaining information from shipwrecks. 

“Another thing we’re doing with a lot of this data is actually coral reef monitoring. So we’re making 3D reconstructions of coral reefs, and we’re working with quite a few customers to do this,” McGregor said.  

Hydrus reduced the surveying costs for this particular mission by up to 75%, according to the company. This enabled the team to conduct more frequent and extensive surveying of the wreck in a shorter period of time. 

Bota Systems to showcase its latest sensors at Robotics Summit
April 18, 2024

Bota Systems will be at Booth 315 on the show floor at the Robotics Summit & Expo, which takes place on May 1 and 2, 2024.

Bota offers sensing systems intended to allow robots to work and move safely. | Source: Bota Systems

Bota Systems will exhibit its recently unveiled sensors featuring a through-hole flange design and enhanced cable management at the Robotics Summit & Expo. The company can be found in Booth 315 on the event’s show floor.

“During the Robotics Summit, we will showcase our complete range of sensors at our booth, and we invite you to experience these sensors in action,” Marco Martinaglia, vice president of marketing at Bota Systems, told The Robot Report. “You’ll see a live demonstration of inertia compensation with a handheld device, and a Mecademic Robot equipped with our cutting-edge MiniONE Pro six-axis sensor will perform automated assembly and deburring tasks.”

The company said it designed its latest sensors for humanoid, industrial, and medical robots. It claimed that they can improve functions in fields such as welding and minimally invasive surgeries.

Bota Systems added that its force-torque sensors can give robots a sense of touch, enabling them to accurately and reliably perform tasks that were previously only possible with manual operators.

Bota Systems designs for ease of integration

“We are particularly excited to have just announced the release of our latest sensor, the PixONE,” said Ilias Patsiaouras, co-founder and chief technology officer of Bota Systems.

“The PixONE sensor’s innovative hollow shaft design allows it to be seamlessly integrated between the robot’s arm and the end-of-arm tooling [EOAT], maintaining the integrity of internal cable routing,” he added. “This design is particularly advantageous as many robotic arm manufacturers and OEMs are moving towards internal routing to eliminate cable tangles and motion restrictions.”

Bota Systems is an official distribution and integration partner of Universal Robots and Mecademic.

In October 2023, the company added NEXT Robotics to its distributor network. NEXT is now its official distributor for the German-speaking countries of Germany, Austria, and Switzerland. That same month, Bota Systems raised $2.5 million in seed funding.

See sensors at the Robotics Summit & Expo

“Our vision is to equip robots with the sense of touch, making them not only safer and more user-friendly, but also more collaborative,” Klajd Lika, co-founder and CEO of Bota Systems, told The Robot Report. “We look forward to the Robotics Summit and Expo because it brings together the visionaries and brightest minds of the industry — this interaction is valuable for us to shape the development of our next generation of innovative sensors.”

This will be the largest Robotics Summit & Expo ever. It will include more than 200 exhibitors, various networking opportunities, a Women in Robotics breakfast, a career fair, an engineering theater, a startup showcase, and more. Registration is now open for the event.

March 2024 robotics investments total $642M
April 18, 2024

March 2024 robotics funding was buoyed by significant investment into software and drone suppliers.

Chinese and U.S. companies led March 2024 robotics investments. Credit: Eacon Mining, Dan Kara

Thirty-seven robotics firms received funding in March 2024, pulling in a total monthly investment of $642 million. March’s investment figure was significantly less than February’s mark of approximately $2 billion, but it was in keeping with other monthly investments in 2023 and early 2024 (see Figure 1, below).

Figure 1: March 2024 investments dropped from the previous month.

California companies secure investment

As described in Table 1 below, the two largest robotics investments in March were secured by software suppliers. Applied Intuition, a provider of software infrastructure to deploy autonomous vehicles at scale, received a $250 million Series E round, while Physical Intelligence, a developer of foundation models and other software for robots and actuated devices, attracted $70 million in a seed round. Both firms are located in California.

Other California firms receiving substantial rounds included Bear Robotics, a manufacturer of self-driving indoor robots that raised a $60 million Series C round, and unmanned aerial system (UAS) developer Firestorm, whose seed funding was $20 million.

Table 1: March 2024 robotics investments

Company | Amount ($) | Round | Country | Technology
Agilis Robotics | 10,000,000 | Series A | China | Surgical/interventional systems
Aloft | Estimate | Other | U.S. | Drones, data acquisition / processing / management
Applied Intuition | 250,000,000 | Series E | U.S. | Software
Automated Architecture | 3,280,000 | Estimate | U.K. | Micro-factories
Bear Robotics | 60,000,000 | Series C | U.S. | Indoor mobile platforms
BIOBOT Surgical | 18,000,000 | Series B | Singapore | Surgical systems
Buzz Solutions | 5,000,000 | Other | U.S. | Drone inspection
Cambrian Robotics | 3,500,000 | Seed | U.K. | Machine vision
Coctrl | 13,891,783 | Series B | China | Software
DRONAMICS | 10,861,702 | Grant | U.K. | Drones
Eacon Mining | 41,804,272 | Series C | China | Autonomous transportation, sensors
ECEON Robotics | Estimate | Pre-seed | Germany | Autonomous forklifts
ESTAT Automation | Estimate | Grant | U.S. | Actuators / motors / servos
Fieldwork Robotics | 758,181 | Grant | U.K. | Outdoor mobile manipulation platforms, sensors
Firestorm Labs | 20,519,500 | Seed | U.S. | Drones
Freespace Robotics | Estimate | Other | U.S. | Automated storage and retrieval systems
Gather AI | 17,000,000 | Series A | U.S. | Drones, software
Glacier | 7,700,000 | Other | U.S. | Articulated robots, sensors
IVY TECH Ltd. | 421,435 | Grant | U.K. | Outdoor mobile platforms
KAIKAKU | Estimate | Pre-seed | U.K. | Collaborative robots
KEF Robotics | Estimate | Grant | U.S. | Drone software
Langyu Robot | Estimate | Other | China | Automated guided vehicles, software
Linkwiz | 2,679,725 | Other | Japan | Software
Motional | Estimate | Seed | U.S. | Autonomous transportation systems
Orchard Robotics | 3,800,000 | Pre-seed | U.S. | Crop management
Pattern Labs | 8,499,994 | Other | U.S. | Indoor and outdoor mobile platforms
Physical Intelligence | 70,000,000 | Seed | U.S. | Software
Piximo | Estimate | Grant | U.S. | Indoor mobile platforms
Preneu | 11,314,492 | Series B | Korea | Drones
QibiTech | 5,333,884 | Other | Japan | Software, operator services, uncrewed ground vehicles
Rapyuta Robotics | Estimate | Other | Japan | Indoor mobile platforms, autonomous forklifts
RIOS Intelligent Machines | 13,000,000 | Series B | U.S. | Machine vision
RITS | 13,901,825 | Series A | China | Sensors, software
Robovision | 42,000,000 | Other | Belgium | Computer vision, AI
Ruoyu Technology | 6,945,312 | Seed | China | Software
Sanctuary Cognitive Systems | Estimate | Other | Canada | Humanoids / bipeds, software
SeaTrac Systems | 899,955 | Other | U.S. | Uncrewed surface vessels
TechMagic | 16,726,008 | Series C | Japan | Articulated robots, sensors
Thor Power | Estimate | Seed | China | Articulated robots
Viam | 45,000,000 | Series B | Germany | Smart machines
WIRobotics | 9,659,374 | Series A | S. Korea | Exoskeletons, consumer, home healthcare
X Square | Estimate | Seed | U.S. | Software
Yindatong | Estimate | Seed | China | Surgical / interventional systems
Zhicheng Power | Estimate | Series A | China | Consumer / household
Zhongke Huiling | Estimate | Seed | China | Humanoids / bipeds, microcontrollers / microprocessors / SoC

Drones get fuel for takeoff in March 2024

Providers of drones, drone technologies, and drone services also attracted substantial individual investments in March 2024. Examples included Firestorm and Gather AI, a developer of inventory monitoring drones whose Series A was $17 million.

In addition, drone services provider Preneu obtained $11 million in Series B funding, and DRONAMICS, a developer of drone technology for cargo transportation and logistics operations, got a grant worth $10.8 million.

Companies in the U.S. and China received the majority of the March 2024 funding, at $451 million and $100 million, respectively (see Figure 2, below).

Companies based in Japan and the U.K. were also well represented among the March 2024 investment totals. Four companies in Japan secured a total of $34.7 million, while an equal number of firms in the U.K. attracted $13.5 million in funding.


Figure 2: March 2024 robotics investment by country.

Nearly 40% of March’s robotics investments came from a single round: Applied Intuition’s $250 million Series E, or about 39% of the $642 million total. The remaining funding classes were all represented in March 2024 (Figure 3, below).

Figure 3: March 2024 robotics funding by type and amount.

Editor’s notes

What defines robotics investments? The answer to this simple question is central in any attempt to quantify them with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and investing

Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and intelligent systems companies

Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, analyze, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification

Funding information is collected from several public and private sources. These include press releases from corporations and investment groups, corporate briefings, market research firms, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded and estimates are made where investment amounts are not provided or are unclear.


Project CETI develops robotics to make sperm whale tagging more humane
April 14, 2024

Project CETI is using robotics, machine learning, biology, linguistics, natural language processing, and more to decode whale communications.

Project CETI is a nonprofit scientific and conservation initiative that aims to decode whale communications. | Source: Project CETI

Off the idyllic shores of Dominica, a country in the Caribbean, hundreds of sperm whales gather deep in the sea. While their communication sounds like a series of clicks and creaks to the human ear, these whales have unique, regional dialects and even accents. A multidisciplinary group of scientists, led by Project CETI, is using soft robotics, machine learning, biology, linguistics, natural language processing, and more to decode their communications. 

Founded in 2020, Project CETI, or the Cetacean Translation Initiative, is a nonprofit organization dedicated to listening to and translating the communication systems of sperm whales. The team is using specially created tags that latch onto whales and gather information for the team to decode. Getting these tags to stay on the whales, however, is no easy task. 

“One of our core philosophies is we could never break the skin. We can never draw blood. These are just our own, personal guidelines,” David Gruber, the founder and president of Project CETI, told The Robot Report

“[The tags] have four suction cups on them,” he said. “On one of the suction cups is a heart sensor, so you can get the heart rate of the whale. There’s also three microphones on the front of it, so you hear the whale that it’s on, and you can know the whales that’s around it and in front of it.

“So you’ll be able to know from three different microphones the location of the whales that are speaking around it,” explained Gruber. “There’s a depth sensor in there, so you can actually see when the whale was diving and so you can see the profiles of it going up and down. There’s a temperature sensor. There’s an IMU, and it’s like a gyroscope, so you can know the position of the whale.”
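For illustration only, the channels Gruber describes map onto a per-sample record like the following sketch; the field names, units, and layout are assumptions, not Project CETI's actual data format:

    from dataclasses import dataclass
    from typing import Tuple

    # Hypothetical per-sample record for a whale tag. Field names,
    # units, and layout are assumptions, not Project CETI's format.
    @dataclass
    class TagSample:
        t_s: float                               # seconds since attachment
        audio: Tuple[float, float, float]        # one frame per hydrophone
        heart_rate_bpm: float                    # suction-cup heart sensor
        depth_m: float                           # depth/dive-profile sensor
        water_temp_c: float                      # temperature sensor
        gyro_rad_s: Tuple[float, float, float]   # IMU angular rates
        accel_m_s2: Tuple[float, float, float]   # IMU accelerations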


Finding a humane way to tag whales

One of the core principles of Project CETI, according to Gruber, is to use technology to bring people closer to animals. 

“There was a quote by Stephen Hawking in a BBC article, in which he posited that the full development of AI and robotics would lead to the extinction of the human race,” Gruber said. “And we thought, ‘This is ridiculous, why would scientists develop something that would lead to our own extinction?’ And it really inspired us to counter this narrative and be like, ‘How can we make robots that are actually very gentle and increase empathy?’”

“In order to deploy those tags onto whales, what we needed was a form of gentle, stable, reversible adhesion,” Alyssa Hernandez, a functional morphologist, entomologist, and biomechanist on the CETI team, told The Robot Report. “So something that can be attached to the whale, where it would go on and remain on the whale for a long amount of time to collect the data, but still be able to release itself eventually, whether naturally by the movements of the whale, or by our own mechanism of sort of releasing the tag itself.”

This is what led the team to explore bio-inspired techniques of adhesion. In particular, the team settled on studying suction cups that are common in marine creatures. 

“Suction discs are pretty common in aquatic systems,” said Hernandez. “They show up in multiple groups of organisms, fish, cephalopods, and even aquatic insects. And there are variations often on each of these discs in terms of the morphology of these discs, and what elements these discs have.”

Hernandez drew on her biology background to design suction-cup grippers that would work particularly well on sperm whales, which are constantly moving through the water. This means the suction cups would have to withstand changing pressures and forces and stay on a whale’s uneven skin even as it moves.

“In the early days, when we first started this project, the question was, ‘Would the soft robots even survive in the deep sea?’” said Gruber. 

An overview of Project CETI’s mission. | Source: Project CETI

How suction cup shape changes performance

“We often think of suction cups as round, singular material elements, and in biology, that’s not usually the case,” noted Hernandez. “Sometimes these suction disks are sort of elongated or slightly different shaped, and oftentimes they have this sealing rim that helps them keep the suction engaged on rough surfaces.”

Hernandez said the CETI team started off with a standard, circular suction cup. Initially, the researchers tried out multiple materials and combinations of stiff backings and soft rims. Drawing on her biology experience, Hernandez began to experiment with more elongated, ellipse shapes. 

“I often saw [elongated grippers] when I was in museums looking at biological specimens or in the literature, so I wanted to look at an ellipse-shaped cup,” Hernandez said. “So I ended up designing one that was a medium-sized ellipse, and then a thinner ellipse as well. Another general design that I saw was more of this teardrop shape, so smaller at one end and wider at the base.” 

Hernandez said the team also looked at peanut-shaped grippers. In trying these different shapes, she looked for one that would provide increased resistance over the more traditional circular suction cups.

“We tested [the grippers] on different surfaces of different roughness and different compliance,” recalled Hernandez. “We ended up finding that compared to the standard circle, and variations of ellipses, this medium-sized ellipse performed better under shear conditions.” 

She said the teardrop-shaped gripper also performed well in lab testing. These shapes performed better because, unlike a circle, they don’t have a uniform stiffness throughout the cup, allowing them to bend with the whale as it moves. 

“Now, I’ve modified [the suction cups] a bit to fit our tag that we currently have,” Hernandez said. “So, I have some versions of those cups that are ready to be deployed on the tags.”

Project CETI uses drones to monitor sperm whale movements and to place the tags on the whales. | Source: Project CETI

Project CETI continues iterating

The Project CETI team is actively deploying its tags using a number of methods, including having biologists press them onto whales using long poles, a method called pole tagging, and using drones to press the tags onto the whales. 

Once they’re on the whale, they stay on for anywhere from a few hours to a few days. Once they fall off, the CETI team has a mechanism that allows them to track the tags down and pull all of the gathered data off of them. CETI isn’t interested in making tags that can stay on the whales long-term, because sperm whales can travel long distances in just a few days, and it could hinder their ability to track the tags down once they fall off. 

The CETI team said it plans to continue iterating on the suction grippers and trying new ways to gently get crucial data from sperm whales. It’s even looking into tags that would be able to slightly crawl to different positions on the whale to gather information about what the whale is eating, Gruber said. The team is also interested in exploring tags that could recharge themselves. 

“We’re always continuing to make things more and more gentle, more and more innovative,” said Gruber. “And putting that theme forward of how can we be almost invisible in this project.”

NEURA and Omron Robotics partner to offer cognitive factory automation
April 4, 2024

NEURA Robotics and Omron Robotics and Safety Technologies say their strategic alliance will make cognitive systems ‘plug and play.’

NEURA has developed cognitive robots in a variety of form factors. Source: NEURA Robotics

Talk about combining robotics and artificial intelligence is all the rage, but some convergence is already maturing. NEURA Robotics GmbH and Omron Robotics and Safety Technologies Inc. today announced a strategic partnership to introduce “cognitive robotics” into manufacturing.

“By pooling our sensor and AI technologies and expertise into an ultimate platform approach, we will significantly shape the future of the manufacturing industry and set new standards,” stated David Reger, founder and CEO of NEURA Robotics.

Reger founded the company in 2019 with the intention of combining sensors and AI with robotics components for a platform for app development similar to that of smartphones. The “NEURAverse” offers flexibility and cost efficiency in automation, according to the company.

“Unlike traditional industrial robots, cognitive robots have the ability to learn from their environment, make decisions autonomously, and adapt to dynamic production scenarios,” said Metzingen, Germany-based NEURA. “This opens new application possibilities including intricate assembly tasks, detailed quality inspections, and adaptive material handling processes.”

Omron has sensor, channel expertise

“We see NEURA’s cognitive technologies as a compelling growth opportunity for industrial robotics,” added Olivier Welker, president and CEO of Omron Robotics and Safety Technologies. “By combining NEURA’s innovative solutions with Omron’s global reach and automation portfolio, we will provide customers new ways to increase safety, productivity, and flexibility in their operations.”

Pleasanton, Calif.-based Omron Robotics is a subsidiary of OMRON Corp. focusing on automation and safety sensing. It designs and manufactures industrial, collaborative, and mobile robots for various industries.

“We’ve known Omron for quite some time, and even before I started NEURA, we had talked about collaborating,” Reger told The Robot Report. “They’ve tested our products, and we’ve worked together on how to benefit both sides.”

“We have the cognitive platform, and they’re one of the biggest sensor, controllers, and safety systems providers,” he added. “This collaboration will integrate our cognitive abilities and NEURAverse with their sensors for a plug-and-play solution, which everyone is working toward.”

Omron Robotics’ Olivier Welker and NEURA’s David Reger celebrate their partnership. Source: NEURA

Collaboration has ‘no limits’

When asked whether NEURA and Omron Robotics’ partnership is mainly focused on market access, Reger replied, “It’s not just the sales channel … there are no really big limits. From both sides, there will be add-ons.”

Rather than see each other as competitors, NEURA and Omron Robotics are working to make robots easier to use, he explained.

“As a billion-dollar company, it could have told our startup what it wanted, but Omron is different,” said Reger. “I felt we got a lot of respect from Olivier and everyone in that organization. It won’t be a one-sided thing; it will be just ‘Let’s help each other do something great.’ That’s what we’re feeling every day since we’ve been working together. Now we can start talking about it.”

NEURA has also been looking at mobile manipulation and humanoid robots, but adding capabilities to industrial automation is the “low-hanging fruit, where small changes can have a huge effect,” said Reger. “A lot of things for humanoids have not yet been solved.”

“I would love to just work on household robots, but the best way to get there is to use the synergy between industrial robotics and the household market,” he noted. “Our MAiRA, for example, is a cognitive robot able to scan an environment and from an idle state pick any known or unknown objects.”

MAiRA cognitive robot on MAV mobile base. Source: NEURA Robotics

Ease of use drives NEURA strategy

NEURA and Omron Robotics promise to make robots easier to use, helping overall adoption, Reger said.

“A big warehouse company out of the U.S. is claiming that it’s already using more than 1 million robots, but at the same time, I’m sure they’d love to use many more robots,” he said. “It’s also in the transformation from a niche market into a mass market. We see that’s currently only possible if you somehow control the environment.”

“It’s not just putting all the sensors inside the robot, which we were first to do, and saying, ‘OK, now we’re able to interact with a human and also pick objects,'” said Reger. “Imagine there are external sensors, but how do you calibrate them? To make everything plug and play, you need new interfaces, which means collaboration with big players like Omron that provide a lot of sensors for the automation market.”

NEURA has developed its own sensors and explored the balance of putting processing in the cloud versus the edge. To make its platform as popular with developers as that of Apple, however, the company needs the support of partners like Omron, he said.

Reger also mentioned NEURA’s partnership with Kawasaki, announced last year, in which Kawasaki offers the LARA CL series cobot with its portfolio. “Both collaborations are incredibly important for NEURA and will soon make sense to everyone,” he said.

NEURA to be at Robotics Summit & Expo

Reger will be presenting a session on “Developing Cognitive Robotics Systems” at 2:45 p.m. EDT on Wednesday, May 1, Day 1 of the Robotics Summit & Expo. The event will be at the Boston Convention and Exhibition Center, and registration is now open.

“I’ll be talking about making robots cognitive to enable AI to be useful to humanity instead of competing with us,” he said. “AI is making great steps, but if you look at what it’s doing, like drawing pictures or writing stories — these are things that I’d love to do but don’t have the time for. But if I ask, let’s say, AI to take out the garbage or show it a picture of garbage, it can tell me how to do it, but it’s simply not able to do something about it yet.”

NEURA is watching humanoid development but is focusing on integrating cognitive robotics with sensing and wearables as it expands in the U.S., said Reger. The company is planning for facilities in Detroit, Boston, and elsewhere, and it is looking for leadership team members as well as application developers and engineers.

“We don’t just want a sales office, but also production in the U.S.,” he said. “We have 220 people in Germany — I just welcomed 15 new people who joined NEURA — and are starting to build our U.S. team. In the past several months, we’ve gone with only European and American investors, and we’re looking at the Japanese market. The U.S. is now open to innovation, and it’s an exciting time for us to come.”


Stealthy startup Mendaera is developing a fist-sized medical robot with Dr. Fred Moll’s support
March 22, 2024

Mendaera is working on medical technology that combines robotics, AI, and real-time imaging in a compact device.

Editor’s Note: This article was syndicated from The Robot Report’s sister site Medical Design & Outsourcing.

The veil is starting to lift on medical robotics startup Mendaera Inc. as it exits stealth mode and heads toward regulatory submission with a design freeze on its first system and verification and validation imminent.

Two former Auris Health leaders co-founded the San Mateo, Calif.-based company. Mendaera also has financial support from Dr. Fred Moll, the Auris and Intuitive Surgical co-founder who is known as “the father of robotic surgery.”

“Among the innovators in the field, Mendaera’s efforts to make robotics commonplace earlier in the healthcare continuum are unique and can potentially change the future of care delivery,” stated Moll in a release.

But Mendaera isn’t a surgical robotics developer. Instead, it said it is working on technology that combines robotics, artificial intelligence, and real-time imaging in a compact device “no bigger than your fist” for procedures including percutaneous instruments.

Mendaera co-founder and CEO Josh DeFonzo. | Source: Mendaera

Josh DeFonzo, co-founder and CEO of Mendaera, offered new details about his startup’s technology and goals in an exclusive interview, as he announced the acquisition of operating room telepresence technology that Avail Medsystems developed.

Avail, which shut down last year, was founded by former Intuitive Surgical and Shockwave Medical leader Daniel Hawkins, who’s now CEO at MRI automation software startup Vista.ai.

“We’re a very different form factor of robot that focuses on what I’ll describe as gateway procedures,” DeFonzo said. “It’s a different category of robots that we don’t believe the market has seen before [as] we’re designing and developing it.”

Those procedures include vascular access for delivery of devices or therapeutic agents; access to organs for surgical or diagnostics purposes; and pain management procedures such as regional anesthesia, neuraxial blocks, and chronic pain management. DeFonzo declined to go into much detail about specific procedures because the product is still in the development stage.

“The procedures that we are going after are those procedures that involve essentially a needle or a needle-like device and real-time imaging, and as such, there are specific procedures that we think the technology will perform very well at,” he said. “However, the technology is also designed to be able to address any suite of procedures that use those two common denominators: real-time imaging and a percutaneous instrument.”

“And the reason that’s an important point to make is that oftentimes, when you are a specialist who performs these procedures, you don’t perform just one,” added DeFonzo. “You perform a number of procedures: central venous catheters [CVCs], peripherally inserted central catheter [PICC] lines, regional anesthetic blocks that are in the interscalene area or axial blocks. The technology is really designed to enable specialists — of whom there are many — the ability to perform these procedures more consistently with a dramatically lower learning curve.”


Mendaera marks progress to date

Preclinical testing has shown the technology has improved accuracy and efficiency in comparison with freehand techniques, regardless of the individual’s skill level, asserted DeFonzo. User research spanned around 1,000 different healthcare providers ranging from emergency medicine and interventional radiology to licensed medical doctors, nurse practitioners, and physician’s assistants.

“It seems to be very stable across user types,” he said. “So whether somebody is a novice, of intermediate skill level, or advanced, the robot is a great leveler in terms of being able to provide consistent outcomes.”

“Whereas when you look at the same techniques performed freehand, the data generally tracks with what you would expect: lesser skilled people are less accurate; more experienced people are more accurate,” DeFonzo noted. “But even in that most skilled category, we do find that the robot makes a fairly remarkable improvement on accuracy and timeliness of intervention.”

Last year, the startup expanded into a production facility to accommodate growth and volume manufacturing for the product’s launch and said its system will be powered by handheld ultrasound developer Butterfly Network’s Ultrasound-on-Chip technology.

Butterfly Network won FDA clearance in 2017 for the Butterfly iQ for iPhone. | Source: Butterfly Network

Mendaera’s aim is to eventually deploy these systems “to the absolute edge of healthcare,” starting with hospitals, ambulatory surgical centers and other procedural settings, said DeFonzo. The company will then push to alternative care sites and primary care clinics as evidence builds to support the technology.

“The entire mission for the company is to ensure essentially that high-quality intervention is afforded to every patient at every care center at every encounter,” he said. “We want to be able to push that as far to the edge of healthcare as possible, and that’s certainly something we aim to do over time, but it’s not our starting point explicitly.”

“As a practical starting point, however, we do see ourselves working in the operating room, in the interventional radiology suite, and likely in cath labs to facilitate these gateway procedures, the access that is afforded adjacent to a larger intervention,” DeFonzo acknowledged.

Mendaera said it expects to submit its system to the U.S. Food and Drug Administration for review through the 510(k) pathway by the end of 2024 with the goal of offering the product clinically in 2025.

“What we really want to do with this technology is make sure that we’re leveraging not just technological trends, but really important forces in the space — robotics, imaging and AI — to dramatically improve access to care,” said DeFonzo. “Whether you’re talking about something as basic as a vascular access procedure or something as complex as transplant surgery or neurosurgery, we need to leverage technology to improve patient experience.”

“We need to leverage technology to help hospitals become more financially sustainable, ultimately improving the healthcare system as we do it,” he said. “So our vision was to utilize technology to provide solutions that aggregate across many millions, if not tens and hundreds of millions, of procedures to make a ubiquitous technology that really helps benefit our healthcare system.”

Mendaera’s research and development group will work with employees from Avail on how to best add the telepresence technology to the mix.

“We see a lot of power in what the Avail team has built,” DeFonzo said. “Bringing that alongside robotic technology, our imaging partnerships and AI, we think that we’ve got a really good opportunity to digitize to a further extent not only expertise in the form of the robot, but [also] clinical judgment, like how do you ensure that the right clinician and his or her input is present ahead of technologies like artificial intelligence that hopefully augment all users in an even more scalable way.”

Bota Systems launches upgraded force-torque sensor for small cobots
March 22, 2024

Bota Systems’ latest multi-axis sensor provides a sensitivity level three to five times higher than the current SensONE sensor.

The SensONE T5 force-torque sensor on a UR collaborative robot. | Source: Bota Systems

Bota Systems AG has launched the SensONE T5, a high-sensitivity version of its SensONE multi-axis force-torque sensor. The company said its latest sensor provides a sensitivity level of 0.05 N / 0.002 Nm, three to five times higher than that of its predecessor.

Zurich-based Bota Systems said it built the SensONE T5 for collaborative robots with small payloads of up to 11.02 lb. (5 kg). The compact and lightweight sensor offers optimal sensitivity for small robots, according to the company.

“This new force-torque sensor’s excellent sensitivity opens up exciting new possibilities for collaborative small-payload robots, which are used for performing highly sensitive applications,” said Ilias Patsiaouras, co-founder and chief technology officer of Bota Systems, in a release. “The SensONE T5 will find its niche in end-of-line quality testing of small parts, such as buttons in electronics, as well as precision assembly of highly detailed, delicate tasks, such as the routing and installation of electric cables into cabinets.”


SensONE T5 designed for ease of integration

A robotic force-torque sensor is a device that simultaneously measures the force and torque applied to a surface. The measured output signals are used for real-time feedback control, enabling cobots to perform challenging human-machine interaction tasks, explained Bota Systems.

It added that the sensor most commonly used for such robotic applications is a six-axis force-torque sensor, which measures forces along and torques about all three axes. Bota Systems said it designed its latest system for challenging applications.

The SensONE T5 comes in a dustproof, water-resistant, and compact package. The company claimed that it is easy to integrate into a robotic arm and requires no mounting adapter.

Temperature drift on the sensor is negligible, and the new sensor provides accuracy within 2% of full scale at a sampling rate of up to 2,000 Hz, said Bota. The sensor is available with two communications options: serial USB/RS422 and EtherCAT. It comes with smooth TwinCAT, URCap, ROS, LabVIEW, and MATLAB software integration, according to the company.
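
The sensor's ROS support suggests a straightforward integration pattern: a node subscribes to the driver's wrench stream and reacts when contact forces cross a threshold. The Python sketch below is a minimal illustration rather than Bota's documented interface; the topic name /bota/wrench, the use of geometry_msgs/WrenchStamped, and the threshold value are all assumptions.

    # Minimal sketch of consuming force-torque data in ROS 1 (Python).
    # Assumptions: a driver publishing geometry_msgs/WrenchStamped on
    # /bota/wrench -- the real topic name and driver API may differ.
    import rospy
    from geometry_msgs.msg import WrenchStamped

    CONTACT_THRESHOLD_N = 0.5  # illustrative contact threshold in newtons

    def on_wrench(msg):
        fz = msg.wrench.force.z  # force along the tool axis, in newtons
        if abs(fz) > CONTACT_THRESHOLD_N:
            rospy.loginfo("Contact detected: Fz = %.3f N", fz)
            # ...switch the arm to a compliant or force-controlled mode here...

    rospy.init_node("ft_contact_monitor")
    rospy.Subscriber("/bota/wrench", WrenchStamped, on_wrench)
    rospy.spin()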

The SensONE T5 force-torque sensor can be integrated into robotic arms without a mounting adapter. | Source: Bota Systems

See Bota Systems at the Robotics Summit & Expo

Bota Systems is an official distribution and integration partner of Universal Robots and Mecademic. In October 2023, the company added NEXT Robotics to its distributor network.

NEXT is now its official distributor for the German-speaking countries of Germany, Austria, and Switzerland. That same month, Bota Systems raised $2.5 million in seed funding. 

Marathon Venture Capital led the round, with participation from angel investors. Bota Systems said it plans to use the funding to grow its team to address increasing demand from leading research labs and manufacturing companies. It also plans to accelerate its product roadmap.

To learn more about Bota Systems, visit it at Booth 315 at the Robotics Summit & Expo, which will be held on May 1 and 2 in Boston.

This will be the largest Robotics Summit ever. It will include more than 200 exhibitors, various networking opportunities, a women in robotics breakfast, a career fair, an engineering theater, a startup showcase, and more. Registration is now open for the event.

AMD announces Embedded+ architecture to accelerate edge AI
Tue, 06 Feb 2024 | https://www.therobotreport.com/amd-announces-embedded-architecture-to-accelerate-edge-ai/
AMD Embedded+ combines embedded processors with adaptive systems on chips to shorten edge AI time to market.

The new AMD Embedded+ architecture for high-performance compute. Source: Advanced Micro Devices

Robots and other smart devices need to process sensor data with a minimum of delay. Advanced Micro Devices Inc. today launched AMD Embedded+, a new computing architecture that combines AMD Ryzen Embedded processors with Versal adaptive systems on chips, or SoCs. The single integrated board is scalable and power-efficient and can accelerate time to market for original design manufacturer, or ODM, partners, said the company.

“In automated systems, sensor data has diminishing value with time and must operate on the freshest information possible to enable the lowest-latency, deterministic response,” stated Chetan Khona, senior director of industrial, vision, healthcare, and sciences markets at AMD, in a release.

“In industrial and medical applications, many decisions need to happen in milliseconds,” he noted. “Embedded+ maximizes the value of partner and customer data with energy efficiency and performant computing that enables them to focus in turn on addressing their customer and market needs.”

AMD said it has innovated in high-performance computing, graphics, and visualization technologies for more than 50 years. The Santa Clara, Calif.-based company claimed that Fortune 500 businesses, billions of people, and research institutions around the world rely on its technology daily.

In the two years since it acquired Xilinx, AMD said it has seen increasing demand for AI in industrial/manufacturing, medical/surgical, smart-city infrastructure, and automotive markets. Not only can Embedded+ support video codecs and AI inferencing, but the combination of Ryzen and Versal can enable real-time control of robot arms, Khona said.

“Diverse sensor data is relied upon more than ever before, across applications,” said Khona in a press briefing last week. “The question is how to get sensor data from autonomous systems into a PC if it isn’t on a USB or some consumer interface.”


AMD Embedded+ paves a path to sensor fusion 

“The market for bringing processing closer to the sensor is growing rapidly,” said Khona. The use cases for embedded AI are growing, with the machine vision market growing to $600 million and sensor data analysis to $1.4 billion by 2028, he explained.

“AMD makes the path to sensor fusion, AI inferencing, industrial networking, control, and visualization simpler with this architecture and ODM partner products,” Khona said. He described the single motherboard as usable with multiple types of sensors, allowing for offloaded processing and situational awareness.

AMD said it has validated the Embedded+ integrated compute platform to help ODM customers reduce qualification and build times without needing to expend additional hardware or research and development resources. The architecture enables the use of a common software platform to develop designs with low power, small form factors, and long lifecycles for medical, industrial, and automotive applications, it said.

The company asserted that Embedded+ is the first architecture to combine AMD x86 compute with integrated graphics and programmable I/O hardware for critical AI-inferencing and sensor-fusion applications. “Adaptive computing excels in deterministic, low-latency processing, whereas AI Engines improve high performance-per-watt inferencing,” said AMD.

Ryzen Embedded processors, which contain high-performance Zen cores and Radeon graphics, also offer rendering and display options for an enhanced 4K multimedia experience. In addition, they include a built-in video codec for 4K H.264/H.265 encode and decode.

The combination of low-latency processing and high performance-per-watt inferencing enables high performance for tasks such as integrating adaptive computing in real time with flexible I/O, AI Engines for inferencing, and AMD Radeon graphics, said AMD.

It added that the new system combines the best of each technology. Embedded+ enables 10GigE Vision and CoaXPress connectivity to cameras via SFP+, said AMD, and image pre-processing occurs at pixel clock rates. This is especially important for mobile robot navigation, said Khona.

Sapphire delivers first Embedded+ ODM system

Embedded+ also allows system designers to choose from an ecosystem of ODM board offerings based on the architecture, said AMD. They can use it to scale their product portfolios to deliver performance and power profiles best suited to customers’ target applications, it asserted.

Sapphire Technology has built the first ODM system with the Embedded+ architecture, the Sapphire Edge+ VPR-4616-MB, a low-power Mini-ITX form factor motherboard. It offers the full suite of capabilities in as little as 30 W by using the Ryzen Embedded R2314 processor and the Versal AI Edge VE2302 adaptive SoC.

The Sapphire Edge+ VPR-4616-MB is also available in a full system, including memory, storage, power supply, and chassis. Versal is a programmable network on a chip that can be tuned for power or performance, said AMD. With Ryzen, it provides programmable logic for sensor fusion and real-time controls, it explained.

“By working with a compute architecture that is validated and reliable, we’re able to focus our resources to bolster other aspects of our products, shortening time to market and reducing R&D costs,” said Adrian Thompson, senior vice president of global marketing at Sapphire Technology. “Embedded+ is an excellent, streamlined platform for building solutions with leading performance and features.”

The Embedded+ qualified VPR-4616-MB from Sapphire Technology is now available for purchase.

KettyBot Pro will provide personalized customer service, says Pudu Robotics
Wed, 31 Jan 2024 | https://www.therobotreport.com/kettybot-pro-will-provide-personalized-customer-service-says-pudu-robotics/
KettyBot Pro's new features include a larger screen for personalized advertising, cameras for navigation, and smart tray inspection.

KettyBot Pro is designed for multiple functions. Source: Pudu Robotics

Pudu Technology Co. today launched KettyBot Pro, the newest generation of its delivery and reception robot. The service robot is designed to address labor shortages in the retail and restaurant industries and enhance customer engagement, said the company.

“In addition to delivering food and returning items, KettyBot can attract, greet, and guide customers in dynamic environments while generating advertising revenue, reducing overhead, and enhancing the in-store experience,” stated Shenzhen, China-based Pudu.

“We hear from various businesses that it’s hard to maintain adequate service levels due to staff being overwhelmed and stretched thin,” said Felix Zhang, founder and CEO of Pudu Robotics, in a release. “Robots like KettyBot Pro lend a helping hand by collaborating with human staff, improving their lives by taking care of monotonous tasks so that they can focus on more value-added services like enhancing customer experience. And people love that you can talk to it.”


KettyBot Pro designed to step up service

KettyBot Pro can enhance the customer experience with artificial intelligence-enabled voice interaction, said Pudu Robotics. The mobile robot also has autonomous path planning.

The company said the latest addition to its fleet of commercial service robots includes the following new features:

  • Passability upgrade: A new RGBD depth camera — with an ultra-wide angle that boosts the robot’s ability to detect and avoid objects — reduces KettyBot’s minimum clearance from 55 to 52 cm (21.6 to 20.4 in.) under ideal conditions. This allows the robot to navigate through narrow passageways and operate in busy dining rooms and stores.
  • Smart tray inspection: Pudu claimed that this functionality is “a first in the industry.” The robot uses a fisheye camera above the tray to detect the presence or absence of objects on the tray. Once a customer has picked up their meal, the vision system will automatically recognize the completion of the task and proceed to the next one without the need for manual intervention.
  • Customization for customers: The integration with PUDU Open Platform allows users to personalize KettyBot Pro’s expressions, voice, and content for easy operation and the creation of differentiated services. In a themed restaurant, the KettyBot Pro can display expressions or play lines associated with relevant characters as it delivers meals. It can also provide personalized welcome messages and greeting services, such as birthday services in star-rated hotels.
  • Mobile advertising display: Through the PUDU Merchant Management Platform, businesses can flexibly edit personalized advertisements, marketing videos, and more. Equipped with an 18.5 in. (38.1 cm) large screen, the KettyBot Pro offers new ways to promote menu updates and market products for restaurant and retail clients.
  • New color schemes: The KettyBot is now available in “Pure Black” in addition to the white-and-yellow and yellow-and-black color schemes of the original model. Pudu said this variety will better meet the aesthetic preferences of customers in different industries across global markets. For instance, high-end hotels and business venues regard Pure Black as the premium choice, it said.

Pudu Robotics builds for growth

Founded in 2016, Pudu Robotics said it has shipped nearly 70,000 units in more than 60 countries. Since KettyBot's launch in 2021, global brands such as KFC, MediaMarkt, Pizza Hut, and Walmart have deployed the robot in high-traffic environments. These companies use the robot to deliver orders, market menu items and products, and welcome guests, said Pudu.

With growing healthcare needs and advances in artificial intelligence, the U.S. service robotics market is poised to grow this year, Zhang told The Robot Report.

Pudu Robotics — which reached $100 million in revenue in 2022 — is building two new factories near Shanghai that it said will triple the company’s annual capacity and help it meet global demand.

The role of ToF sensors in mobile robots
Tue, 23 Jan 2024 | https://www.therobotreport.com/the-role-of-tof-sensors-in-mobile-robots/
Time-of-flight or ToF sensors provide mobile robots with precise navigation, low-light performance, and high frame rates for a range of applications.

ToF sensors provide 3D information about the world around a mobile robot, supplying important data to the robot's perception algorithms. | Credit: E-con Systems

In the ever-evolving world of robotics, the seamless integration of technologies promises to revolutionize how humans interact with machines. An example of transformative innovation, the emergence of time-of-flight or ToF sensors is crucial in enabling mobile robots to better perceive the world around them.

ToF sensors are similar to lidar in that both measure the travel time of emitted light to create depth maps. However, the key distinction lies in these cameras’ ability to provide depth images that can be processed faster, and they can be built into systems for various applications.

This maximizes the utility of ToF technology in robotics. It has the potential to benefit industries reliant on precise navigation and interaction.

Why mobile robots need 3D vision

Historically, RGB cameras were the primary sensor for industrial robots, capturing 2D images based on color information in a scene. These 2D cameras have been used for decades in industrial settings to guide robot arms in pick-and-pack applications.

Such 2D RGB cameras always require a camera-to-arm calibration sequence to map scene data to the robot’s world coordinate system. 2D cameras are unable to gauge distances without this calibration sequence, thus making them unusable as sensors for obstacle avoidance and guidance.
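
To make that calibration step concrete: a camera-to-arm calibration yields a rigid transform that maps points from the camera frame into the robot's world coordinate system. Here is a minimal Python sketch; the 4x4 matrix is a placeholder standing in for the output of a real hand-eye calibration routine, not values from any actual setup.

    import numpy as np

    # Placeholder camera-to-robot extrinsic (4x4 homogeneous matrix).
    # In practice this comes from a hand-eye calibration, not hard-coded values.
    T_robot_cam = np.array([
        [0.0, -1.0, 0.0, 0.30],
        [1.0,  0.0, 0.0, 0.05],
        [0.0,  0.0, 1.0, 0.60],
        [0.0,  0.0, 0.0, 1.00],
    ])

    def camera_to_robot(p_cam):
        """Map a 3D point from camera coordinates into the robot's frame."""
        p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous form
        return (T_robot_cam @ p)[:3]

    print(camera_to_robot([0.1, 0.2, 0.5]))  # a pick point seen by the camera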

Autonomous mobile robots (AMRs) must accurately perceive the changing world around them to avoid obstacles and build a world map while remaining localized within that map. Time-of-flight sensors have been in existence since the late 1970s and have evolved to become one of the leading technologies for extracting depth data. It was natural to adopt ToF sensors to guide AMRs around their environments.

Lidar was adopted as one of the early types of ToF sensors to enable AMRs to sense the world around them. Lidar bounces a laser light pulse off of surfaces and measures the distance from the sensor to the surface.

However, the first lidar sensors could only perceive a slice of the world around the robot using the flight path of a single laser line. These lidar units were typically positioned between 4 and 12 in. (10 and 30 cm) above the ground, and they could only see objects that broke through that plane of light.

The next generation of AMRs began to employ 3D stereo RGB cameras that provide depth information. These sensors use two stereo-mounted RGB cameras and a “light dot projector” that enables the camera array to accurately view the projected light on the scene in front of the camera.

Companies such as Photoneo and Intel RealSense were two of the early 3D RGB camera developers in this market. These cameras initially enabled industrial applications such as identifying and picking individual items from bins.

Until the advent of these sensors, bin picking was known as a “holy grail” application, one which the vision guidance community knew would be difficult to solve.

The camera landscape evolves

A salient feature of these ToF cameras is their low-light performance, which prioritizes human-eye safety. The 6 m (19.6 ft.) range in far mode facilitates optimal people and object detection, while the close-range mode excels in volume measurement and quality inspection.

The cameras return the data in the form of a “point cloud.” On-camera processing capability mitigates computational overhead and is potentially useful for applications like warehouse robots, service robots, robotic arms, autonomous guided vehicles (AGVs), people-counting systems, 3D face recognition for anti-spoofing, and patient care and monitoring.
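
To make the point-cloud output concrete, the sketch below back-projects a depth image into 3D points using the standard pinhole camera model. The intrinsics (fx, fy, cx, cy) and image size are placeholder values for illustration, not those of any particular ToF camera.

    import numpy as np

    def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
        """Back-project an HxW depth image (meters) into an Nx3 point cloud
        via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
        return points[points[:, 2] > 0]  # drop pixels with no depth return

    # Placeholder intrinsics for a 640x480 sensor -- illustrative only.
    depth = np.full((480, 640), 2.0)  # a flat wall 2 m away
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)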

Time-of-flight technology is significantly more affordable than other 3D-depth range-scanning technologies like structured-light camera/projector systems.

For instance, ToF sensors facilitate the autonomous movement of outdoor delivery robots by precisely measuring depth in real time. This versatile application of ToF cameras in robotics promises to serve industries reliant on precise navigation and interaction.

How ToF sensors take perception a step further

A fundamental difference between time-of-flight and RGB cameras is their ability to perceive depth. RGB cameras capture images based on color information, whereas ToF cameras measure the time taken for light to bounce off an object and return, thus rendering intricate depth perception.
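
The arithmetic behind this principle is simple: the light travels to the surface and back, so depth is half the round-trip time multiplied by the speed of light. A minimal worked example follows; note that many ToF cameras infer the delay indirectly from the phase shift of modulated light rather than by timing individual pulses.

    SPEED_OF_LIGHT = 299_792_458.0  # meters per second

    def tof_depth_m(round_trip_s):
        """Depth from a round-trip time: the light covers the distance twice."""
        return SPEED_OF_LIGHT * round_trip_s / 2.0

    print(tof_depth_m(20e-9))  # a 20 ns round trip corresponds to ~3 m of depth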

ToF sensors capture data to generate intricate 3D maps of surroundings with unparalleled precision, thus endowing mobile robots with an added dimension of depth perception.

Furthermore, stereo vision technology has also evolved. Using an IR pattern projector, it illuminates the scene and compares disparities of stereo images from two 2D sensors – ensuring superior low-light performance.

In comparison, ToF cameras use a sensor, a lighting unit, and a depth-processing unit. This allows AMRs to have full depth-perception capabilities out of the box without further calibration.

One key advantage of ToF cameras is that they extract 3D images at high frame rates, with rapid separation of background and foreground. They can also function in both bright and dark conditions through the use of active illumination.

In summary, compared with RGB cameras, ToF cameras can operate in low-light applications and without the need for calibration. ToF camera units can also be more affordable than stereo RGB cameras or most lidar units.

One downside for ToF cameras is that they must be used in isolation, as their emitters can confuse nearby cameras. ToF cameras also cannot be used in overly bright environments because the ambient light can wash out the emitted light source.

A ToF sensor measures depth and distance from the time it takes emitted light to return to the sensor. | Credit: E-con Systems

Applications of ToF sensors

ToF cameras are enabling multiple AMR/AGV applications in warehouses. These cameras provide warehouse operations with depth perception intelligence that enables robots to see the world around them. This data enables the robots to make critical business decisions with accuracy, convenience, and speed. These include functionalities such as:

  • Localization: identifying the robot’s position by scanning the surroundings and matching the collected information against known map data
  • Mapping: building a map from the transit time of light reflected off surrounding objects, typically with a SLAM (simultaneous localization and mapping) algorithm
  • Navigation: moving from Point A to Point B on a known map

With ToF technology, AMRs can understand their environment in 3D before deciding the path to be taken to avoid obstacles. 

Finally, there’s odometry, the process of estimating any change in the position of the mobile robot over some time by analyzing data from motion sensors. ToF technology has shown that it can be fused with other sensors to improve the accuracy of AMRs.
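
As a generic illustration of that fusion idea, and not any vendor’s algorithm, a common entry-level pattern is a complementary filter that blends fast but drift-prone odometry with slower, absolute position fixes derived from ToF data:

    def complementary_fuse(predicted_pos, tof_fix, alpha=0.98):
        """Blend a dead-reckoned position with a ToF-derived fix.
        An alpha near 1 trusts odometry over short horizons; the remaining
        weight pulls the estimate toward the fix to cancel accumulated drift."""
        return alpha * predicted_pos + (1.0 - alpha) * tof_fix

    estimate = 0.0  # 1D position in meters, illustrative only
    for odom_step, fix in [(0.10, 0.00), (0.11, 0.19), (0.09, 0.31)]:
        estimate = complementary_fuse(estimate + odom_step, fix)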

About the author

Maharajan Veerabahu has more than two decades of experience in embedded software and product development, and he is a co-founder and vice president of product development services at e-con Systems, a prominent OEM camera product and design services company. Veerabahu is also a co-founder of VisAi Labs, a computer vision and AI R&D unit that provides vision AI-based solutions for their camera customers.

Gecko Robotics, Rho Impact study how robots and AI could improve sustainability
Sun, 21 Jan 2024 | https://www.therobotreport.com/gecko-robotics-rho-impact-study-how-robots-ai-could-improve-sustainability/
Automated inspection of critical infrastructure can provide data to help multiple industries, report Gecko Robotics and Rho Impact.

Robotics, data, and AI promise to help both sustainability and profitability. Source: Gecko Robotics

Robots and artificial intelligence have a significant role to play in maintaining crumbling infrastructure — and they could help bring about a zero-carbon economy while they’re at it.

That’s the conclusion of a new report by Gecko Robotics and Rho Impact, which studied how these technologies could reduce the environmental impact of critical infrastructure by bringing them into the digital world.

The potential is massive: The report claimed that digitizing carbon-intensive infrastructure could reduce emissions by a whopping 853 million metric tons (MMT) of CO2 annually. This is the equivalent of taking almost two thirds of gas-powered vehicles in the U.S. off the road, according to Gecko Robotics and Rho Impact.

Gecko identifies five areas for digital transformation

The report looked at five different sectors to demonstrate how digital transformation technologies could help create efficiencies and reduce emissions.

Oil and gas pipelines: Using robots for early detection of corrosion and other damage on components could reduce fugitive emissions, the unintended discharge of gases. AI-powered preventative maintenance programs can help avoid pipeline failure, resulting in a possible 556 MMT CO2e reduction in fugitive emissions, according to the companies.

Pulp and paper industry: Digitizing physical assets in this industry could help prevent corrosion of components. Robotics can be used to identify and address corrosion in paper mill tanks and pressure vessels, and AI can be used to make paper mill boiler operations more efficient.

Not only could digitalization reduce annual emissions by 46 MMT CO2e, per the report, but it could also result in a 6% improvement in emissions efficiency.

Maritime transportation: Digitalization could reduce greenhouse gas emissions by optimizing loads and detecting leaks on large ships, which can be up to 70% more efficient than smaller ones.

The report suggested that robots could inspect these large vessels more efficiently, reducing their time in the repair dock. Shippers could deploy AI to optimize loads. As a result, 11 MMT of CO2e emissions could be prevented by making the most efficient vessels more available.

Bridge infrastructure: Deploying robots to collect inspection and maintenance data, and using AI to analyze it and predict outcomes, could help reduce the time bridges are partially or fully closed for maintenance and repair, said Gecko Robotics and Rho Impact.

Digitalizing the inspection process would generate better data on bridges and could help reduce traffic-related emissions by 10 MMT CO2e, the report claimed.

Data key to addressing climate change, says report

The report concluded that it all comes down to data. Bringing major carbon-emitting industries into the digital world requires a comprehensive and detailed understanding of their infrastructure.

But manual inspection methods can result in limited data that doesn’t adequately identify the critical defects in infrastructure assets. Those assets don’t get the maintenance they need, leading to shortened lifespans and premature replacement, which can have significant and avoidable business and environmental impacts.

By contrast, robotic inspections enable operators to collect comprehensive and detailed data, allowing them to prioritize maintenance and repair work that helps make their operations more efficient and extends their lifetime.

Deploying robots and AI as part of a digital transformation strategy makes the task of collecting and gaining insight from that critical data easier than ever. Not only could these technologies help industry meet the challenge of global warming, but they could also help boost their bottom lines.

About the author

Matthew Greenwood is a freelance writer for Engineering.com, a sibling site to The Robot Report. He has a background in strategic communications. He writes about technology, manufacturing, and aerospace.

Ansys, NVIDIA team up to test autonomous vehicle sensors
Sat, 13 Jan 2024 | https://www.therobotreport.com/ansys-nvidia-team-up-to-test-autonomous-vehicle-sensors/
Integration will enable engineers to simulate accurate camera, lidar, radar and thermal camera sensors to train autonomous vehicles.

Ansys AVxcelerate Sensors will be able to generate simulated sensor data based on scenario-based road conditions made within NVIDIA DRIVE Sim. Credit: Ansys

Ansys AVxcelerate Sensors, the simulation giant’s autonomous vehicle (AV) sensor modeling and testing software, is now available within NVIDIA DRIVE Sim, a scenario-based simulator for developing and testing automotive AI. By integrating these technologies, engineers can simulate camera, lidar, radar, and thermal camera sensor data to help train and validate advanced driver-assistance systems (ADAS) and AV systems.

NVIDIA DRIVE Sim is built on NVIDIA Omniverse, an industrial digitization platform based on Universal Scene Description (OpenUSD).

To meet safety standards, the AI within ADAS and AV systems must be trained and tested on millions of edge cases on billions of miles of roads. It’s not possible to do all these tests in the real world within a reasonable budget or amount of time. As a result, engineers need to use simulated environments to safely test at scale. With the latest integration of Ansys and NVIDIA technology, sensor and software performance can be tested in a digital world to meet these safety requirements.

In other words, engineers will be able to predict the performance of AV sensors, such as camera, lidar, radar and thermal camera sensors, using Ansys AVxcelerate Sensors’ simulations which will gather inputs based on digital worlds created using NVIDIA DRIVE Sim.


“Perception is crucial for AV systems, and it requires validation through real-world data for the AI to make smart, safe decisions,” said Walt Hearn, senior vice president of worldwide sales and customer excellence at Ansys. “Combining Ansys AVxcelerate Sensors with NVIDIA DRIVE Sim, powered by Omniverse, provides a rich playground for developers to test and validate critical environmental interactions without limitations, paving the way for OEMs to accelerate AV technology development.”

In other autonomous vehicle news, Waymo recently announced it will begin testing its robotaxis on highways in Phoenix without a human safety driver. The rides will be available only to company employees, and their guests, to start. Waymo’s vehicles have been allowed on highways but required a human safety driver in the front seat to handle any issues.

Kodiak Robotics introduced at CES 2024 its sixth-generation, driverless-ready semi truck. The company said its self-driving truck is designed for scaled deployment and builds on five years of real-world testing. This testing included 5,000 loads carried over more than 2.5 million miles. The Mountain View, Calif.-based company said it will use the new truck for its own driverless operations on roadways between Dallas and Houston this year.

Top 10 robots seen at CES 2024
Sat, 13 Jan 2024 | https://www.therobotreport.com/top-10-robots-seen-at-ces-2024/
A quick look at some of the most noteworthy robots at the 2024 CES show in Las Vegas.

LAS VEGAS — CES 2024 featured a wide range of emerging technologies, from fitness devices and videogames to autonomous vehicles. But robots always have a significant presence in the sprawling exhibit halls.

The Robot Report visited numerous booths in the Eureka Park section of CES, as well as in the other focused sections of the event. Here are some highlights from this week’s event:

Mobinn climbs stairway toward success

Mobinn is a Korean startup focused on last-mile autonomous delivery vehicles. While the concept of last-mile delivery isn’t new, Mobinn demonstrated an innovative wheeled robot that can climb stairs.

The robot can go up and down stairs thanks to its compliant wheels. A self-leveling box on top of the robot keeps the cargo level, so your drinks and food don’t spill out of their containers.

Glidance guides people with vision impairments

The latest prototype of the Glidance device is fully functional for public demos and includes a camera in the handle and radar sensors in the base. | Credit: The Robot Report

RoboBusiness PitchFire 2023 winner Glidance also won a CES 2024 Innovation Award and showed the latest functional prototype of its Glide device in its booth. The robotic Glide guides sight-impaired individuals much as a guide dog does.

The startup is designing Glide to be affordable and easy to learn to use when it starts shipping later this year. I tried Glide firsthand (with my eyes closed). The experience was incredible, and I can only imagine how promising this technology would be for an individual with sight loss.

The team at Glidance mentioned to me that Stevie Wonder came to the Glidance booth for a demo of the product during CES.

Unitree H1 humanoid steals the show

There were two full-size humanoid robots at CES 2024.

Kepler Robotics had a stationary model of the new Kepler Forerunner K1 in its Eureka Park booth. The robot includes 14 linear-axis motors and 14 rotary-axis motors. Unfortunately, the company was unable to give live demos of the Forerunner.

The Unitree H1 humanoid robot uses sensors in its “head” to perceive the world around the robot as it navigates and avoids obstacles. | Credit: The Robot Report

The internet influencer darling of CES has to be the Unitree H1 humanoid, and the company was giving nearly continuous live demos of the H1 at its booth.

Kudos to the Unitree marketing team for its now-infamous “kick the robot” videos that have been shared on social media over the past six months. In the videos, H1 appears to be a solid humanoid platform with respectable balance and agility.

However, as a longtime robotics industry insider and experienced robotics applications engineer, I thought the Unitree H1 product demos at CES 2024 were cringe-worthy, as the Unitree demo team walked the H1 robot into crowds of “internet tech influencers” with their cameras ablaze.

The 150 lb. (68 kg) robot danced with the public inches away. A single tripping incident would have sent the robot tumbling into an innocent bystander and made instant headlines. It would have been a public relations disaster and a setback for the industry.

However, there’s no denying that the H1 was a crowd favorite at CES 2024, and the company and its robot received a lot of news media attention. 

Hyundai displays future mobility tech at CES 2024

Hyundai got my vote as one of the industry leaders in mobility and robotics at CES 2024. It is the parent company of Boston Dynamics, but at CES 2024, the Spot and Stretch robots played minor roles in Hyundai’s story.

The company had multiple large-scale booths showing autonomy concepts for the future, including autonomous mobility for both humans and freight, as well as a look at the future of autonomous construction vehicles. Unfortunately, I didn’t get to witness either of the live mobility demonstrations, but the Hyundai Construction Xite concept tractor was an impressive incarnation of autonomous construction designs.

Hyundai presented a concept for the future of autonomous construction equipment with the display of the Construction Xite tractor. (Editor’s note: For scale, the bucket arm is over 10 ft. tall.) | Credit: The Robot Report

AV24 rolls into the showroom

The Indy Autonomous Challenge (IAC) had an impressive booth in the automotive hall of CES 2024, surrounded by well-known brand names. On display was a fully functional version of the newest AV24 autonomous racecar, showing off the integration of an entirely new autonomy stack in the vehicle.

The IAC has partnered with many of the leading automotive technology companies to embed the latest lidar, radar, vision, and GPS sensors within the vehicle. 

dSPACE announced an extended partnership with the IAC that will deliver digital twins of each university team’s vehicle, along with digital twins of each of the race tracks. In turn, these will enable the teams to train their AI drivers completely in simulation and then port the AI models and drive code directly to the physical race cars.

In addition, some sanctioned sim races are possible later this year, said the IAC organizers.

Embodied AI displays updated Moxie

The latest generation of Moxie by Embodied AI was on display in Amazon’s booth at CES. Embodied recently announced new tutoring functionality with the latest software release for Moxie, and it demonstrated the software at the expo.

Amazon had a separate expo suite that featured all of the physical Amazon consumer and smart home products (Amazon Astro was noticeably absent from the display). Moxie entertained the gathered crowds as it demonstrated its interactivity.

Fingervision measures gripper force

My “Unknown Discovery” award of CES 2024 goes to a young Japanese startup called Fingervision. This was a serendipitous discovery of an innovation that uses tiny cameras built into the gripper fingers of an industrial robot.

The cameras provide feedback on the grip force and “slippage” of an item held with the gripper. This is accomplished by imaging the area where the fingers touch an object through a transparent surface. Thus the origin of the company name.

The company has deployed its first grippers into an application where robots are picking up fried chicken nuggets and packaging them.

Honorable mentions from CES 2024

Gatik keeps on trucking — autonomously

Gatik showed the third generation of its on-road autonomous truck. The company has made its mark on autonomous logistics by deploying driving algorithms that plan paths so that the vehicle only makes right-hand turns, avoiding more complex and dangerous left-hand turns.

Gatik first demonstrated fully driverless, freight-only commercial deliveries on the middle mile with Walmart in 2021. Shortly after, it executed the first fully driverless deployment for Canada’s largest retailer, Loblaw.

The company also announced a partnership with Goodyear to develop “smart tires” that provide the autonomous driver with real-time data about tire condition to help maintain traction and control.

Bobcat Rogue X2 gets ready to move the earth

At CES 2024, Bobcat showed off an autonomous concept prototype, the Bobcat Rogue X2. The all-electric, autonomous robot is designed for handling material movement and ground-altering operations at construction, mining, and agriculture sites.

The design prototype of the Rogue X2 at CES had wheels rather than tracks, but manually driven Bobcats can be equipped with tracks, so a production version of the Rogue could have similar configurations.

Ottonomy IO partners with Harbor Lockers

Through a new partnership with Harbor Lockers, the latest generation of Ottobot can now be configured with a payload of Harbor Lockers. This includes the Harbor Locker physical locker infrastructure, as well as the Harbor Locker application interface.

This is the first time that Ottonomy has partnered with a third-party vendor to extend its autonomous last-mile delivery solution.

Lawn-mowing robots arrive in North America

CES is one of the world’s biggest consumer electronics shows. While The Robot Report doesn’t typically cover consumer robotics, it is notable that lawn-mowing robots were ubiquitous at CES this year, with a dozen vendors showing their autonomous systems.

The European market for consumer robotic lawnmowers is already mature, but the North American market is in the early stages of adoption. Without testing all of the different lawn-mowing robots, it’s difficult to determine the market leaders, but the two most promising solutions I saw at the show were the new Yarbo Lawn Mower and the latest generation of Navimow from Segway Robotics.
