Tuesday, July 27, 2021

Robots to acquire advanced sense of touch

 AiFoam is an elastic substance that gives robots a sense of touch.

aifoam
Image: news.yahoo


The National University of Singapore has invented a type of "smart foam" that recognizes items without having to touch them.

AiFoam, a "smart foam" invented by a research team at the National University of Singapore (NUS), mimics the human sense of touch. With it, robots could be programmed to recognize nearby items without having to touch them.

AiFoam is an elastic polymer that can stretch up to 230 percent without breaking, according to its creators. The substance was created by combining a fluoropolymer with a surfactant that reduces surface tension. Thanks to these properties, the "foam" can be cut into pieces and fused back into a single piece.

Without breaking, AiFoam can stretch up to 230 percent.

The university revealed the invention in May of this year. However, the study's findings were previously published, in November 2020, in the journal Nature Communications.

The researchers wanted to "show that it is possible to reproduce the human sense of touch in a robot" with this device. "Our research offers up a new paradigm in human-machine interaction for future applications," they claim.

Microscopic particles have been embedded in the foam so that it can detect the presence of an object, much as a human fingertip does. According to the researchers, this allows AiFoam to sense human fingers from millimeters away without touching them.
 

Like the nerve endings in the skin, the material can detect a human presence without contact. Beneath the foam visible to the naked eye, AiFoam conceals small cylinder-shaped electrodes.

The electrical characteristics of the particles change as a finger approaches. This change is picked up by the electrodes, and the signals are then interpreted by a computer.

Using this principle, the robot can determine the direction and the amount of the applied force. AiFoam also brings robots closer to human-like interaction, since a robot can detect when an object it is holding starts to slip.
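As a rough illustration of the principle described above, the sketch below shows how readings from a small grid of electrodes could be turned into a proximity and direction estimate. This is a hypothetical example, not the NUS team's code: the electrode layout, units, and thresholds are all assumptions made for the sake of the demonstration.

```python
# Hypothetical sketch: estimating where a finger is from changes in electrode
# readings. Electrode layout, units, and thresholds are illustrative assumptions.
import numpy as np

# Assume a 3x3 grid of electrodes with known (x, y) positions (arbitrary units).
positions = np.array([(x, y) for y in range(3) for x in range(3)], dtype=float)

baseline = np.full(9, 100.0)        # readings with nothing nearby
reading = baseline.copy()
reading[[4, 5]] -= [8.0, 3.0]       # a finger near the center-right shifts two electrodes

delta = baseline - reading          # change caused by the approaching finger
if delta.max() > 2.0:               # illustrative detection threshold
    # Weight each electrode position by its signal change to estimate where the
    # finger is, without any physical contact.
    centroid = (positions * delta[:, None]).sum(axis=0) / delta.sum()
    print(f"Proximity detected near x={centroid[0]:.2f}, y={centroid[1]:.2f}")
else:
    print("Nothing detected")
```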

In the future, this material could be used in prostheses or in industrial machinery that requires high precision.

Curated by Gerluxe

Toyota's robot butler - to assist the elderly with housework

 Toyota's latest robot butler will assist you with daily tasks.

 

 Image: theverge

The machine, according to the Japanese company, has a technology that allows it to sense space in 3D, making item recognition easier.

The robot is intended to assist the elderly with housework. Image: Toyota Research Institute

Humans have come a long way in the world of robotics. Small technologies that turn our homes into smart homes have become almost essential for many people in recent years.

Toyota aims to contribute to the growing body of technology dedicated to making domestic chores easier. The Japanese company recently released a video demonstrating its new robot accomplishing activities that previously appeared too difficult for a machine to complete.

Cleaning tables, picking up objects, filming itself, maneuvering through various areas, and so forth. This robot can carry out a variety of duties while navigating around the house and avoiding obstacles. The home appliance, according to Engadget, can even detect items from their reflections.

Most robots are programmed to react to the objects and geometry in front of them, according to the automaker, but they are unable to properly perceive the space around them.

As a result, Toyota intended to use a different technique so that its robots could "perceive the scene's 3D geometry while also recognizing objects and surfaces."

But what do they hope to achieve with this change? As seen in the video, the robot can deliberately grasp transparent glasses when cleaning them, thanks to this new technology created by Toyota.

According to Max Bajracharya, vice president of robotics at Toyota Research Institute, the goal of this initiative is to assist everyone, but notably the elderly. According to Bajracharya, this technology "poses specific obstacles" "because of the diversity and complexity of our homes, where simple activities can become enormous challenges."

LG Electronics' new delivery robot

 

lg robot
LG Electronics

LG Electronics has unveiled its new delivery robot, which it hopes will take the rapid delivery market by storm.

The South Korean firm demonstrated its brainchild at the International Robot Society Conference, claiming that it can move both indoors and out. The delivery robot is equipped with four wheels. Image: LG

 LG Electronics exited the cell phone market in April of this year and has since focused on the development of other technical products. During the International Robot Society Conference on July 13, the South Korean business unveiled its new delivery robot.

 LG Electronics hopes to "lead the fast delivery robot market" with this gadget. This area is expected to grow rapidly, so these types of robots will become increasingly in demand.

According to the company, the robot can travel freely both indoors and outdoors, avoiding obstacles and moving forward even when the ground is uneven. It does so thanks to four wheels that change in size depending on the surface they are treading on.

Until now, the brand's delivery robots had only been able to operate indoors and had never been taken outside. With this announcement, the company promises a robot that can move freely in both environments.

LG Electronics has stated that, after several rounds of testing, it expects to begin trialing its creation by the end of 2021. This will allow the company to expand its delivery operations.

 LG Boston Robotics Lab was founded in Boston early last year by the multinational. They've been working on next-generation robotic technology with motion intelligence alongside Kim Sang-bae, a professor of mechanical engineering at the Massachusetts Institute of Technology (MIT).

 

Friday, July 23, 2021

BeachBot Robot to clean beaches of cigarette butts

Beaches will be free of cigarette butts thanks to a robot.

beachbot
Image: projectbb

Nowadays, going to the beach is a common pastime: a few days of disconnecting along the coastlines of our country or elsewhere. However, there is an evil that haunts the beaches: bad-mannered tourists litter them with innumerable items, including some 4.5 trillion cigarette butts each year. BeachBot's mission is to clean up beaches all over the world.

We previously told you about Trove, a Microsoft Garage program that compensates people for their images. Who would have guessed that such a project could aid BeachBot in its noble mission?

BeachBot is a robot made by TechTics, a consulting firm based in The Hague, to pick up abandoned cigarette butts on the beach.

According to a 2019 study by Brazilian scientists, 4.5 trillion cigarette butts wind up in the environment each year, where the fibrous particles, which can take up to 14 years to degrade, have become "the most common form of personal item detected on beaches." They progressively poison marine turtles, birds, fish, snails, and other species along the beaches.

According to a February study by US government experts, when water comes into contact with discarded cigarette butts, the filters release more than 30 compounds that are "extremely harmful" to aquatic animals and cause "a serious hazardous waste problem." In humans, several of these compounds have been linked to cancer, asthma, obesity, autism, and lower IQ.

Beachbot is a beach-cleaning robot designed by Edwin Bos and Martijn Lukaart that can detect cigarette butts, pick them out, and dump them into a safe receptacle.

To teach the beach robot (and, specifically, its AI system) to find cigarette butts, TechTics must show it hundreds of photographs of cigarette butts in varied conditions, such as partially hidden, so that it can recognize and remember them.
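To make the training idea concrete, here is a minimal sketch of how such a detector could be fine-tuned from a pretrained image model on a folder of labeled photos. This is not TechTics' actual pipeline; the folder layout (data/butt, data/no_butt), model choice, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch: fine-tuning a pretrained network as a two-class
# "cigarette butt / no butt" detector. Not TechTics' actual pipeline.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects photos sorted into data/butt/ and data/no_butt/ subfolders.
dataset = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")   # start from ImageNet features
model.fc = nn.Linear(model.fc.in_features, 2)      # two classes: butt / no butt

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```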

Bos and his team used Microsoft Trove, an app that uses a transparent data marketplace to connect AI engineers with photographers, to help them collect those photos. Trove facilitates a direct photo exchange for fair market value. People can submit their images, and TechTics compensates contributors 25 cents per accepted image.

"Like a child learning to recognize an object for the first time," Christian Liensberger, senior program manager at Trove, a Microsoft Garage project, explains how the system learns to see photos.

Through Trove, TechTics is attempting to collect 2,000 photographs. It has received roughly 200 useful images thus far. If Trove were available worldwide and Rewards points were offered as incentives, this would be much easier.

Curated By Gerluxe

What Is Robotics?

Why Robotics Is Important

You're probably wondering what robotics is and why it's important in today's world. Robotics is the field that deals with the design, manufacture, and operation of robots. It's an industry with a wide range of consumer applications. A robot, in broad terms and without going into too much detail, can be thought of as a computer that can move. We explain why studying industrial robotics is important.

Robots have been used in industry for a long time, but they are now considered a critical component of Industry 4.0 deployment, automating repetitive operations and taking over tasks that are difficult, risky, or demanding for humans.

What is the purpose of robotics?

Robots have several advantages, such as the ability to operate 24 hours a day and being stronger and more precise than humans, which is why they have become protagonists in many businesses today.

When we think about robots, we think of articulated arms in factories and, if we're over 30, of Johnny 5 from the film "Short Circuit." With the passage of time and the advancement of the field, however, this technology is now used in many more disciplines than you might think. Are you familiar with all of its applications?

Robotics' Applications

Home: robot vacuum cleaners such as Conga and Roomba.

Medicine: da Vinci-style assisted surgery.

Transportation: self-driving cars such as Waymo's.

Military aviation: self-flying planes.

Business: automated logistics centers (Amazon already uses them) and manufacturing with articulated arms (UR, KUKA).

Software solutions: some financial organizations, such as BBVA's AI Factory, are already experimenting with artificial intelligence-based contract evaluation systems.

The field of robotics has come a long way in recent years. One advancement has been the growth and gathering of data, which gives programmers more information to work with during the programming process. Another is the use of sensors and other connected devices that let more complex and capable robots "know" their surroundings. And of course, artificial intelligence has enabled robots to perform tasks autonomously and improve production processes.

What role does robotics play in society?

According to the latest OECD study, 21.7 percent of occupations in Spain are at high risk of automation, while the World Economic Forum estimates that the industry will generate 58 million new jobs. More than half of companies are aware that their employees will require training and retraining at least once every 12 months, according to the WEF's "The Future of Jobs Report."

As a result, even if a company is full of robots, someone will always be in charge of controlling, inspecting, maintaining, programming, and overseeing their behavior. Furthermore, humans have a capacity for improvisation that machines lack, so there will always be a demand for qualified personnel. That is why expertise in robotics is so important.

Finally, and even though it may sound premature and dystopian, people and machines are destined to understand each other and work together, whether we like it or not.

Curated By Gerluxe

SOPHIA Humanoid Robot

 

SOPHIA humanoid robot

SOPHIA, Humanoid Robot

Hanson Robotics' Sophia is the most important creation of David Hanson, a robotics engineer who in 2013 decided to establish his own company: Hanson Robotics.

Humanoid Robot Sophia, created in 2016, is a robot not only from a mechanical point of view but also in the sense of having a high-level artificial intelligence brain that allows it to process unstructured language. That is, beyond responding to or interacting with something already programmed, Sophia has the ability to learn new answers and increase its knowledge every time it interacts with a human being, which is a basic example of Artificial Intelligence and machine learning.
 

Sophia is capable of holding conversations and showing facial gestures similar to those of people. Her appearance is humanoid: she has a face, arms, and hands, and, most strikingly, her gestures, which add a differentiating element to the interaction.

This translates into variety in her communication, where she shows surface-level facial emotions while remaining what she is: a highly advanced robot with a very human appearance.

Sophia is a clear example of these developments on a global scale. In Medellin, Sophia inspires both confidence and curiosity, becoming a motivating factor that encourages young people, companies in the sector, and the universities themselves to learn, work collaboratively, and propose projects that solve human needs through artificial intelligence.

Medellin, as an innovative city with great talent in its schools, universities, and companies, can open up and consolidate a broad and competitive labor market. Curiosity feeds science, and with it the country's participation in major projects that arise from the city out of the visit and the curiosity that Sophia awakens.

Some curiosities about Sophia:  

- She is the first humanoid with advanced artificial intelligence that has been created on the planet.

- Sophia was developed in the image and likeness of Audrey Hepburn, Hollywood actress and model.  

- Sophia can display 62 human facial expressions.  

- Her visit to the Universidad Pontificia Bolivariana in Medellín will be her first time in South America and, specifically, her first time in Colombia.

- In 2017, Saudi Arabia granted her citizenship, making her the first robot to have a nationality.

Curated By Gerluxe

Alias Robotics leads the fight against cyber-attacks on industrial robots

  Image: n-economia



Alias Robotics will present a research paper at the Black Hat 2021 international conference reporting more than 100 security flaws discovered in industrial robots.

Alias Robotics, a renowned Spanish company in the field of robot cybersecurity, and Trend Micro, a global cybersecurity leader specializing in fighting cybercrime, have formed a partnership to combat cybercrime in robotics. The first results of the two groups' partnership will be presented at the upcoming 'Black Hat 2021' conference in the United States, where they will demonstrate that collaborative robots are not safe.

Alias Robotics is at the forefront of the fight against industrial robot hacking.

Researchers from Alias Robotics and Trend Micro will discuss the findings of more than three years of research inspecting industrial robots at 'Black Hat 2021,' the world's most prestigious international IT security conference, which will take place in Las Vegas, USA, from July 31 to August 5.

They will share the results of their first combined study on robotic security challenges at the event, which also involved researchers from the Austrian university Alpen-Adria-Universität Klagenfurt. The participants have written a detailed report on new results in the field of robotics threat and vulnerability research, which warns of the grave dangers these devices pose.

The paper, which will be presented at Black Hat 2021, calls for a new offensive and complementary approach methodology for protecting robotic arms in a practical and timely manner. The researchers assessed the present status of robotics cybersecurity and outlined the challenges to protecting robotic systems based on a decade of robotics expertise.

"Complexity makes robotics security a challenge," says Victor Mayoral-Vilches, a robotics security researcher at Alias Robotics. "The intrinsic complexity of robotic systems leads to many potential attack vectors," he believes, "attacks that manufacturers fail to neutralize in fair time."

According to Federico Maggi, a senior researcher at Trend Micro, "Reverse engineering is to software security what robot disassembly is to industrial security. Both of these abilities are essential for the next generation of security professionals."

Most industrial robot manufacturers today, like Ford in the 1920s, continue to use various planned obsolescence practices and organize dealers (often called distributors) or approved system integrators into private networks, providing repair parts only to certified companies in an attempt to discourage repairs and evade competition, according to experts from Alias Robotics and Trend Micro.

More than a hundred flaws

More than 100 vulnerabilities were discovered in the study, affecting numerous collaborative industrial robot manufacturers. The findings suggest that robot teardowns can considerably improve the quality, safety, and security of these devices, which can benefit the robotics sector and its supply chain. The data also point to the practice of deliberate obsolescence. The authors call for a 'right to repair' for robots and encourage end users to communicate their safety concerns to their supply chains and OEMs.

Alias Robotics and Trend Micro's engagement in the sphere of robot cybersecurity also involves project collaboration and intelligence exchange. In fact, both companies will submit joint reports and work closely with e-crime task forces such as the Spanish Police, the Basque Cybersecurity Centre, and United States government security forces, among others.

"Both manufacturers and security researchers must be responsible when it comes to security. According to our findings, several collaborative robot manufacturers have been disregarding cybersecurity for a long time. Worse, these companies push legal responsibilities down the supply chain, including to numerous distributors and system integrators in Spain.

Manufacturers must accept responsibility, respond quickly, and invest in security. A robot that lacks cybersecurity is insecure. This is especially concerning with collaborative robots that work alongside us. That is why it will be presented at Black Hat "Victor Mayoral-Vilches agrees.

Curated By Gerluxe

Saturday, July 17, 2021

How does artificial intelligence work

artificial intelligence

AI is made up of algorithms that act on programming principles, as well as Machine Learning (ML) and different ML techniques including Deep Learning (DL).

Machine Learning (ML)

It is a branch of Artificial Intelligence that is one of the most well-known, and it is responsible for inventing strategies that allow algorithms to learn and improve over time. It entails a vast quantity of code and complex mathematical formulas in order for machines to solve a problem.

This branch of AI is one of the most developed for commercial or corporate reasons today, as it is utilized to quickly process enormous volumes of data and store it in a human-readable format.

Data retrieved from industrial facilities, where connected elements send a constant stream of data on machine status, production, functioning, temperature, and so on to a central core, is a good example of this. To accomplish continuous improvement and suitable decision making, a vast amount of data created from the manufacturing process must be examined; yet, the volume of this data requires humans to spend a significant amount of time (days) on analysis and traceability.

This is where Machine Learning comes in, allowing data to be examined as it is incorporated into the manufacturing process, allowing for faster and more accurate detection of trends or anomalies in the operation. Warnings or alarms for decision-making can be triggered in this fashion.
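As a minimal sketch of that idea, the snippet below flags anomalies in a simulated machine-sensor stream as the readings arrive, using a rolling mean and standard deviation (a z-score test). The data, window size, and threshold are illustrative and not taken from any particular factory system.

```python
# Minimal sketch: flag anomalous sensor readings as they arrive, using a
# rolling z-score. Values and thresholds are illustrative only.
from collections import deque
import math
import random

window = deque(maxlen=100)   # most recent readings
THRESHOLD = 3.0              # flag readings more than 3 standard deviations away

def check(reading: float) -> bool:
    """Return True if the reading looks anomalous given recent history."""
    anomaly = False
    if len(window) >= 30:    # wait until we have enough history
        mean = sum(window) / len(window)
        var = sum((x - mean) ** 2 for x in window) / len(window)
        std = math.sqrt(var) or 1e-9
        anomaly = abs(reading - mean) / std > THRESHOLD
    window.append(reading)
    return anomaly

# Simulated temperature stream with one injected fault at step 400.
for i in range(500):
    value = 70.0 + random.gauss(0, 0.5) + (15.0 if i == 400 else 0.0)
    if check(value):
        print(f"step {i}: anomalous reading {value:.1f}")
```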

However, machine learning is a broad category, and the development of these artificial intelligence techniques gave rise to Deep Learning (DL).

Deep Learning (DL)

This is a subset of Machine Learning (ML) that refers to a group of algorithms (or neural networks) designed for automatic machine learning and non-linear reasoning.

In this technique, the algorithms are grouped into artificial neural networks that are meant to operate like the neural networks found in the human brain. It is a method of deep learning that does not require task-specific code.
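For readers who want to see what "grouping algorithms into an artificial neural network" looks like in code, here is a tiny illustrative example. The layer sizes are arbitrary and not tied to any specific application mentioned in the article.

```python
# Illustrative only: a tiny feed-forward neural network. Layer sizes are arbitrary.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(8, 32),   # 8 input features (e.g. sensor values)
    nn.ReLU(),
    nn.Linear(32, 32),  # hidden layer; the "deep" part comes from stacking these
    nn.ReLU(),
    nn.Linear(32, 1),   # one output, e.g. a predicted quantity
)

x = torch.randn(4, 8)   # a batch of 4 examples
print(model(x).shape)   # torch.Size([4, 1])
```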

Deep Learning is required to perform considerably more complicated functions, such as analyzing a large number of variables at the same time. Deep Learning, for example, is used to contextualize the data collected by autonomous car sensors, such as the distance between objects, the speed at which they are traveling, and predictions based on their movement. Among other things, this data is used to determine how and when to change lanes.

We're still in the early stages of DL's development, so we can't expect it to reach its full potential very soon. We're seeing it utilized more and more in business to transform data into far more detailed and scalable collections.

In the corporate world, artificial intelligence (AI) is becoming more prevalent.

Automation, language processing, and effective data analysis are just a few of the commercial and production areas where AI is already in use. Companies are streamlining their manufacturing processes, operations, and internal efficiency across the board.

AI is based on a set of computer programming rules that let a machine act and solve problems in the same way a human would.

Companies are interested in incorporating AI technology into their processes because of the benefits it provides.

When we consider how artificial intelligence behaves at whatever level, we discover that all AI projects are data projects. We'll use an iceberg as an example to illustrate this point. We used this analogy because we feel that an AI project may be broken down into three stages: 

1) Gather relevant data for the project, 

2) train the algorithm(s), and 

3) test the algorithms that have been trained.

The goal of this comparison is to explain what working with artificial intelligence entails. Artificial intelligence techniques like machine learning, deep learning, and natural language processing aren't magic, and they rely heavily, if not entirely, on the significant data preparation required. We estimate that data preparation takes up more than half of the effort in a successful AI project. 

This assumes you haven't already cleansed and produced a sufficient data collection, which is extremely likely if you're working with data from your business for the first time. Despite this significant effort, data preparation is a crucial activity that goes mostly unnoticed, similar to the majority of an iceberg that lies beneath the sea. As a result, the intricacy of this part of the process is not always appreciated, as it is not always reflected in the visible outputs of a project, much as only a small fraction of the iceberg protrudes above the surface.

Have you ever used Alexa, Siri, or Google Home as a voice assistant? Let's pretend we're having a conversation with Google Home and look at what happens during each of these stages.

Phase 1: Gather the necessary information.

Google Home recognizes voice requests and responds appropriately, such as answering a question, setting a timer, or controlling a connected device. To achieve these types of results, the first phase, data preparation, must include actions such as:

obtaining millions of voice recordings from a variety of sources;

eliminating background noise and other unwanted sounds from recordings;

converting the recordings to a single audio format (for example, mp3);

labeling the recordings accurately;

and other related actions.

Finally, the data must be divided into at least two groups: one for phase 2, algorithm modeling (training data), and another for phase 3, testing the previously trained algorithm (test data).
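A minimal sketch of that split, assuming the recordings have already been turned into numeric features, might look like this (the arrays below are random placeholders standing in for prepared, labeled data):

```python
# Sketch of the train/test split; the arrays are placeholders for prepared data.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 20)          # 1000 prepared examples, 20 features each
y = np.random.randint(0, 2, 1000)     # their labels

# Hold out 20% of the data for phase 3 (testing); the rest trains the model.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
print(len(X_train), "training examples,", len(X_test), "test examples")
```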

For a company like Google, we imagine that all of the tasks associated with this phase were completed over several years by a competent team of engineers and developers who earned their pay by tackling this problem through multiple iterations and product upgrades. Furthermore, data is the core business of big technology companies, which is why they have access to massive volumes of relevant data to organize into robust training and test data sets for successful product development.

Despite this, as consumers, we have all encountered the flaws in these products at some point, such as when a voice assistant misread one or more words, or when a smart scanner failed to recognize an image.

When we compare these limits to the resources required vs the data available to the average AI researcher, we can see the scale of the data preparation step and its significance in the overall process of creating something useful.

Phase 2: Algorithm training

We'll start by deciding the algorithms we'll train in this phase. Assume each of these plasticine pieces represents a different sort of algorithm: the red block represents linear regression, the orange block represents k-means, the purple block represents neural networks, and the turquoise block represents support vector machines.

What takes place during the training? You've probably heard the adage that you should find the algorithm that best fits your needs. The plasticine comparison will help you understand this metaphor at this point. Each algorithm molds itself to the training data by discovering patterns in the data during the training process, so the algorithms can look like this at the end of the training phase:

Phase 3: Putting the trained algorithms to the test

Phase 3, often known as the testing phase, involves providing test data to each of the trained models in order to determine which one makes the best predictions. To continue with the plasticine comparison, if a successful outcome is defined as "the ability to roll," we may deduce that the red and purple figures in the image above are the only two that can roll. The rounder, purple figure, however, will clearly be able to roll more efficiently. A successful result in the case of Google Home, for example, would be providing an adequate response to a voice command.
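A loose, self-contained parallel to phases 2 and 3 in code: train a few of the algorithm families named in the plasticine analogy on a training split, then keep whichever scores best on the held-out test data. The dataset is synthetic, the model choices are illustrative, and k-means is left out because it is an unsupervised method, so a classifier stands in for each family.

```python
# Illustrative phases 2 and 3: train several algorithm families, then pick the
# one that scores best on held-out test data. Synthetic data, arbitrary settings.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = {
    "linear model": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "support vector machine": SVC(),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)                 # phase 2: training
    scores[name] = model.score(X_test, y_test)  # phase 3: testing

best = max(scores, key=scores.get)
print(scores)
print("best model on test data:", best)
```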

If the testing results needed to be improved, we could take one of two approaches: 

1) alter the algorithm, or 

2) add fresh and relevant data to our project. We remind you that this is an iterative process, although the sequence generally follows the Iceberg Model's three phases.

Finally, when creating and implementing an Artificial Intelligence project, there are many other factors to consider. We believe that this simplification, which we've dubbed the Iceberg Model, will be useful to you in framing the overall approach to your next project, as well as in articulating the work that goes on behind the scenes, or, to return to our analogy, underwater.

Curated By Gerluxe

What is artificial intelligence and how does it work?

 What exactly is artificial intelligence, how and why is it used?

What is artificial Intelligence

Artificial Intelligence is an idea that has been around for a long time. Alan Turing wrote about it in his 1950 paper "Computing Machinery and Intelligence," and John McCarthy coined the phrase Artificial Intelligence in the mid-1950s.

Since then, the field of computer science has progressed significantly.

AI, according to professor Patrick H. Winston of the Massachusetts Institute of Technology, is "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie together thinking, perception, and action."

Others, such as DataRobot CEO Jeremy Achin, define Artificial Intelligence as a computational system that allows machines to execute tasks that would otherwise need human intelligence.

Margaret Rouse, the editor-in-chief of TechTarget's technology encyclopedia, defines it as a system that simulates several human functions, including learning, reasoning, and self-correction.

As can be seen, the three definitions of AI all apply to thinking machines or computer systems. They imitate human intelligence in order to execute tasks that only humans are capable of.

Other publications, on the other hand, define AI as a computer system that solves complicated issues beyond the human brain's capabilities.

In this sense, AI uses the computing capacity of computers to tackle complicated problems that the human mind is incapable of solving.

"Since everything we love about our civilization is a product of our intellect, magnifying our human intelligence with artificial intelligence has the potential to help civilization emerge as never before," says Max Tegmark, head of the Future Life Institute.

Along these lines, Google DeepMind and Oxford University collaborated on research that found AI is capable of interpreting damaged and illegible Ancient Greek inscriptions. While historians and epigraphers have a 57.3 percent error rate, the algorithm responsible for this achievement has a 30.1 percent error rate.

These examples demonstrate how AI can tackle complicated issues in ways that humans cannot. But, exactly, how does AI work?

Artificial intelligence is a group of technologies aimed at simulating human intelligence's unique qualities and capabilities.

The most recent technology advancements cause us to consider where the world is headed. In truth, the technical-scientific field has been bringing about a major worldwide change for some time now: artificial intelligence (AI).

Although there is no precise definition, artificial intelligence is the name given to a series of technologies with qualities or capabilities that were previously limited to the human intellect. The term is applied when a computer mimics cognitive functions that humans associate with other human minds, such as learning or problem solving.

Origins and history

At the Dartmouth conference in 1956, scientists Allen Newell, Herbert Simon, Marvin Minsky, Arthur Samuel, and John McCarthy met, laying the groundwork for the discipline of artificial intelligence. They believed it would be relatively simple to give machines the ability to think.

Going back to the Greeks, the fundamental notions behind artificial intelligence take us to Aristotle, who was the first to describe a set of rules outlining a part of the mind's workings for reaching reasoned conclusions. Ctesibius of Alexandria, a few centuries later, built the first self-regulating machine, though one without reasoning.

Artificial intelligence had its first boom in the late 1950s and early 1960s, when machines outperformed many humans at checkers, 'learned' English, and solved algebraic and logical problems.

Later, between 1968 and 1970, Terry Winograd, a professor of computer science at Stanford University, built the SHRDLU system, which allowed people to question and issue commands to a robot that walked around in a block environment.

In the new century, after significant technological advances, the multinational IBM produced a supercomputer called Watson, which won the game show Jeopardy (a television knowledge contest) against two of its top champions.

Artificial intelligence has changed not only the economic sector, but also the social sphere, with applications ranging from early cancer detection to Amazon deforestation prevention.

Artificial intelligence classifications

In their book 'Artificial Intelligence: A Modern Approach,' Stuart Russell and Peter Norvig distinguish four types of artificial intelligence.

- Systems that think like humans: these attempt to mimic human thinking processes such as decision-making, problem-solving, and learning.

- Systems that act like humans: these attempt to behave like humans, in other words, to imitate human conduct. Robotics is an example.

- Systems that think rationally: these strive to emulate rational, logical thinking; for example, the study of the computations that make it possible to perceive, reason, and act.

- Systems that act rationally: these seek to imitate human behavior in a rational manner and relate to intelligent behavior in artifacts.

What is the difference between traditional artificial intelligence and computational artificial intelligence?

Traditional or symbolic-deductive AI is based on the formal and statistical analysis of human behavior in the face of various problems. It aids decision-making when tackling specific tangible challenges and requires high performance.

It allows for complex decision-making and proposing solutions to a problem. This kind of intelligence also has autonomy and the ability to self-regulate and control itself in order to improve.

Meanwhile, subsymbolic-inductive AI, often known as computational artificial intelligence, entails development or interactive learning. This knowledge is based on empirical evidence.

Artificial intelligence is built using algorithms: mathematical procedures for learning. The data required to train these algorithms is observable data that is publicly available or data supplied by companies, on which the learning process is repeated.

What is the purpose of artificial intelligence? Applications in the field and in the real world

Artificial intelligence is employed in a variety of fields, including robotics, language comprehension and translation, word learning, and so forth.

The following are the primary and most notable sectors where we may discover a well-known evolution of artificial intelligence:

- Information technology

- Finance

- Healthcare and pharmaceuticals

- Heavy manufacturing

- Customer support

- Transportation

- Gaming

What are the potential dangers of artificial intelligence?

While artificial intelligence has numerous advantages in various areas of life, some scientists feel it may introduce new problems.

The financial market is the most vulnerable, as computers' ability to handle massive amounts of data can give those in charge of it influence and allow them to dominate banking on a global scale.

Another issue is the lack of worldwide regulation.

The loss of jobs, on the other hand, is likely the most concerning risk and one that might cause a slew of problems. According to a study published in China in 2015, artificial intelligence will make nearly half of all current occupations obsolete by 2025 if technology continues to disrupt businesses at its present rate.

As a result, scientists have begun to conceptualize the boundaries of artificial intelligence in each of its applications and how they should be handled to ensure that human safety is preserved. 

Curated By Gerluxe

Thursday, July 15, 2021

Artificial intelligence to improve robot programming

Festo uses artificial intelligence to enhance robot programming.

 

robot programming
 

Production, warehouse, and shipping: these are the places where items are made, stored, sorted, and packed, as well as where picking takes place. This means that individual items are repeatedly removed from and returned to storage units such as boxes or cartons. Festo and researchers from the Karlsruhe Institute of Technology (KIT) are working with Canadian partners on the FLAIROP (Federated Learning for Robot Picking) initiative to make picking robots smarter using distributed AI technologies. To do so, they're looking into ways to combine training data from numerous stations, plants, or even companies without asking users to divulge critical company information.


"We're looking into how we can use the most versatile training data from multiple locations to develop more robust and efficient solutions using artificial intelligence algorithms than we could with data from only one robot," says Jonathan Auberle of KIT's Institute for Material Handling and Logistics (IFL). Items are gripped and transferred by autonomous robots at several picking stations during the operation. The robots are trained with a variety of objects at various stations. In the end, they should also be able to handle items from stations they are not familiar with. "We balance data diversity and data security in an industrial environment using the federated learning approach," explains the expert.

Algorithms with plenty of punch for Industry 4.0 and logistics.

Until now, federated learning has primarily been employed in the medical field for image analysis, where patient data security is of particular importance. Accordingly, there is no exchange of training data, such as photos or grasp points, for the artificial neural network. Only pieces of learned knowledge, such as the neural network's local weights, which indicate how strongly one neuron is connected to another, are sent to a central server. There, the weights from all stations are gathered and optimized based on a variety of variables. The improved version is then sent back to the local stations, and the process is repeated. The goal is to develop new, more powerful algorithms for the reliable application of AI in industry and Logistics 4.0 while adhering to data privacy regulations.
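A minimal sketch of the federated-averaging idea described above: each station improves its own copy of the model on data that never leaves the station, only the resulting weights are sent to a central server, which averages them and sends the result back. The toy linear model, data, and hyperparameters below are illustrative assumptions, not the FLAIROP codebase.

```python
# Toy federated averaging: only model weights travel to the server, never the
# local data. Model, data, and settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_training(weights, n_samples=200, lr=0.1, true_w=(2.0, -1.0)):
    """One round of gradient steps on data that never leaves the station."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ np.array(true_w) + rng.normal(scale=0.1, size=n_samples)
    for _ in range(20):
        grad = 2 * X.T @ (X @ weights - y) / n_samples
        weights = weights - lr * grad
    return weights

global_weights = np.zeros(2)
for round_ in range(5):
    # Each of 3 stations trains locally, starting from the shared weights.
    local = [local_training(global_weights.copy()) for _ in range(3)]
    # The server only ever sees weights, which it averages and redistributes.
    global_weights = np.mean(local, axis=0)
    print(f"round {round_}: global weights = {global_weights.round(3)}")
```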

"In the current FLAIROP research, we're working on novel ways for robots to learn from each other without having to share sensitive data or trade secrets. This has two significant advantages: we protect our clients' data, and we gain speed because robots can perform various activities faster," according to Jan Seyler, Head of Advanced Development, Analytics and Control at Festo SE & Co. KG. Collaborative robots can, for instance, assist production workers with repetitive, heavy, and tedious jobs. During the project, two autonomous picking stations at the KIT Institute for Material Handling and Logistics (IFL) and two at Festo SE in Esslingen am Neckar are being set up to train the robots.

Further collaborators include the start-up DarwinAI and the University of Waterloo in Canada.

"DarwinAI is ecstatic to contribute our Explainable AI (XAI) platform to the FLAIROP project, and we're honored to collaborate with such prestigious Canadian and German academic institutions, as well as our industrial partner, Festo. For this fascinating project, we expect that our XAI technology will enable high-value human-in-the-loop operations, which is a necessary aspect of our offering alongside our new approach to Federated Learning. With our basis in academic research, we are excited by this partnership and by the industrial implications of our novel approach for a range of manufacturing customers," explains DarwinAI CEO Sheldon Fernandez.

"The University of Waterloo is thrilled to be collaborating with the Karlsruhe Institute of Technology and Festo, a worldwide leader in industrial automation, to develop the next phase of trustworthy AI for industrial use. By leveraging DarwinAI's Explainable AI (XAI) and Federated Learning, we can enable AI solutions that help factory workers in their daily production tasks to maximize efficiency, productivity, and safety," says Dr. Alexander Wong, Co-Director of the Vision and Image Processing Research Group at the University of Waterloo and Chief Scientist at DarwinAI.

FLAIROP'S PROFILE

The FLAIROP project (Federated Learning for Robot Picking) is a collaboration between Canadian and German partners. The German partners contribute their expertise in robotics, autonomous grasping through Deep Learning, and data security, while the Canadian project partners focus on computer vision through Deep Learning, Explainable AI, and optimization.

Curated By Gerluxe



Heavyweight robots dance like professionals, and they can do it better than most people, in true 'Dirty Dancing' style.

Robot Dancing



Robotics has advanced considerably over the last two decades, demonstrating that the day when robots and humans work side by side is closer than ever. All of this is thanks to businesses like Boston Dynamics, which have honed the machines' agility to previously unseen levels. And, as the company has been demonstrating in recent years, they no longer stumble on the stairs.

Robots perform choreography to the tune of Dirty Dancing.

 

 

The robots can now confidently jump and make all kinds of lovely pirouettes. They can also move over uneven terrain to assist in rescue operations, withstand practically any impact, and dance to any tune without missing a beat. Just a few hours ago, Boston Dynamics set several of its most recognizable robots dancing to celebrate the holidays. And it was no simple choreography, either.

As though they were Frances Houseman and Johnny Castle, the characters in this video danced to the iconic Do You Love Me by The Contours, a song from Dirty Dancing that defined an entire age. One of the humanoid robots begins to jump and move its hips better than most people. It also effortlessly snaps its fingers and replicates some of the choreography's most recognizable moves.

It is not, however, alone. The Boston Dynamics team shows us a second robot following in the footsteps of the first humanoid as it dances to the tune. Both repeat each step in tandem, as if they were dance partners, exhibiting excellent coordination. They aren't the only ones on the dance floor, though. Spot Mini, the company's robotic dog, joins in after a few seconds: a robot that assists doctors in dealing with COVID-19 and isn't afraid to come out on the dance floor when asked.

There's also a third type of robot that, despite its massive size, has the capacity to dance. With this, the Boston Dynamics team closes out 2020 and looks forward to a much better 2021.

Curated By Gerluxe


What is industrial robotics?

Why is industrial robotics an important technology in industrial automation?

  

industrial robot

Industrial robotics and automation are the cornerstones that have enabled the development of Industry 4.0 while also offering various benefits to production resource productivity and efficiency.

The various industrial automation models currently in use remove the subjective aspect of human decisions, resulting in narrower margins of error and more exact processes while relieving human labor from monotonous or risky duties.

Thousands of companies employ industrial robots to automate jobs, improve worker safety, and boost overall productivity while lowering waste and operating expenses. With the rising use of industrial robots in manufacturing environments, demand for a variety of various types of industrial robots to fit certain applications and industries has skyrocketed.

What are industrial robots and how do they work?

Industrial robots are mechanical devices that mimic human movements to some extent. They are used when a person would be in danger, when more strength or precision is required than a human can provide, or when continuous operation is required. The majority of robots are stationary, while some move goods and supplies around the workplace.

Industrial robots are used to conduct operations with a high degree of precision and repeatability, resulting in higher-quality goods. Industrial robots' capacity to work continuously without stopping helps manufacturers increase output. Furthermore, robots can work in potentially hazardous and toxic environments, enhancing working conditions and safety on the factory floor. As a result of these numerous benefits, manufacturers are increasingly incorporating various types of industrial robots into their production lines in order to boost plant efficiency, profitability, and technological maturity.

Industrial robots come in a variety of shapes and sizes.

Different types of automation can be used depending on the needs and demands of the industry sector and production model. In particular, we commonly refer to three types of automation:

Fixed automation

This type of automated system may appear limited, in that the robot is programmed to complete the same task over and over. However, because a robot can execute repetitive tasks without losing accuracy, it is the most beneficial sort of industrial automation for mass and large-scale manufacturing and generates significant efficiency benefits.

Programmable automation

Because it is based on a programmable system, this is an intermediate level of automation. That is, it is a sort of industrial robotics automation in which the software that controls a robot may be reprogrammed. Thanks to this reprogramming, a robot can reconfigure itself and execute a variety of tasks. This sort of industrial robot, with its reprogrammable automatic actions, is highly beneficial in manufacturing environments where different versions or models must be covered.

Flexible automation

This is the type of industrial robotics that is organized into a succession of stations that are connected yet operate independently of one another. Here, a central computer directs all automated systems to perform their duties in a coordinated and orderly manner. This sort of automation allows robots to communicate with one another and coordinate their operations through the exchange of information.

Industries that make use of industrial robots

The manufacturing sector

In many parts of manufacturing, industrial robotics is being employed to help enhance productivity and efficiency while lowering production costs. Many robots in manufacturing collaborate with workers to perform repetitive, monotonous, or intricate tasks under the guidance and control of the workers, just as they do in the healthcare industry. Precision is valued over speed with these machines, as is the ability to be reprogrammed for specific tasks of various sizes and complexities.
 

The use of robotic manufacturing technology is also growing safer. Robots can recognize and avoid humans in the workplace thanks to cameras, sensors, and automatic shutdown capabilities.

Industrial robots in the healthcare industry

Robotics advances have the potential to transform a wide range of health-care practices, including surgery, rehabilitation, therapy, patient companionship, and daily tasks. Robotic tools in health care aren't meant to take over a doctor's job; rather, they're meant to make life easier for them.

Public Safety and Industrial Robotics

Robotic technology is being used in a variety of ways in the military and public safety sectors. Unmanned drones are one of the most noticeable areas. Surveillance and battlefield support missions are possible with this equipment. Military drones can assess threat levels and provide real-time information to soldiers and first responders in war and conflict zones, hostage situations, and natural and man-made disasters.

Robotic gadgets are already gaining traction in a number of economic sectors. As robotic technologies become more affordable, they will soon be offered to customers in a variety of forms, with the potential to change our lives in a variety of ways.

Agriculture automation

The agriculture business has been aggressively seeking to integrate various sorts of robotic technology to help enhance output while lowering overall expenses. Farmers have already been using GPS-guided tractors and combine harvesters. Autonomous systems that automate processes such as pruning, thinning, mowing, spraying, and weeding have recently seen a rise in experimental application. Sensor technology is also being utilized to combat crop pests and diseases.

Food preparation

The kitchen will soon be home to one of the most extravagant advances in robotic technology. In a home kitchen, automated, intelligent robots will be able to prepare and cook hundreds of meals. These "robotic chefs" will be controlled by a smartphone: after the user selects a recipe and arranges pre-packaged containers with the ingredients cut and ready, the robot will rapidly and efficiently prepare the chosen meal.

Industrial robotics in the future

In global production, Industry 4.0 will become increasingly essential. Manufacturers will incorporate robots into factory-wide networks of machines and systems as challenges such as system complexity and data incompatibility are solved. New service models based on real-time data collected by sensors connected to robots are already being developed and marketed by robot manufacturers.

Cloud robotics, in which data from one robot is compared with data from other robots in the same or different locations, is expected to grow rapidly, according to analysts. Thanks to the network, these connected robots can execute the same tasks, which will be used to optimize the robots' motion characteristics, including speed, angle, and force. In the end, the introduction of big data in manufacturing may redraw the line between equipment makers and the manufacturers that use them.

Curated By Gerluxe

Wednesday, July 14, 2021

Scientists are getting closer to solving the methane mystery on Mars.

rovermars
Image NASA

Scientists and non-scientists alike have been intrigued by reports of methane detections on Mars. On Earth, a substantial amount of methane is produced by microbes that help livestock digest plants; the digestion process ends when the animals belch or expel the gas into the air.

While there are no cattle, goats, or sheep on Mars, the discovery of methane is significant because it could mean that microbes have lived, or still live, there. Methane, on the other hand, could also be produced by geologic processes involving the interaction of rocks, water, and heat, without having anything to do with microbes or biology.

Before scientists can pinpoint the origins of methane on Mars, they must first answer a vexing question: why do certain equipment detect the gas while others do not? For example, NASA's Curiosity rover has discovered methane just above the surface of Gale Crater on many occasions. However, the European Space Agency's (ESA) ExoMars mission's Trace Gas Orbiter, TGO, has not identified methane in the Martian atmosphere.

"The Trace Gas Orbiter was expected to indicate that there is a modest quantity of methane everywhere on Mars," Chris Webster, director of the Tunable Laser Spectrometer (TLS) on the SAM instrument, a chemistry laboratory aboard the Curiosity rover, said.

In Gale Crater, the TLS has measured an average of less than half a part per billion by volume of methane. This is the equivalent of a pinch of salt diluted in an Olympic-size pool. It has also recorded surprising spikes of up to 20 parts per billion by volume during these measurements.

"However, I was shocked when the European team said it had found no methane," said Webster, who works at NASA's Jet Propulsion Laboratory in Southern California.

The European orbiter was built to be the most accurate in the world in measuring methane and other gases. At the same time, Curiosity's TLS is so precise that it will be used to identify fires on the International Space Station before they spread and to monitor oxygen levels in astronaut suits. It's also approved for use in power plants, oil pipelines, and military aircraft, where pilots can check the amounts of oxygen and carbon dioxide in their helmets.

Nonetheless, Webster and the SAM team were taken aback by the European orbiter's findings and set out to investigate TLS readings on Mars right away.

Some analysts speculated that the rover was expelling gas on its own. Webster explained, "So we looked at relationships with rover orientation, dirt, rock crushing, wheel degradation, you name it." "I can't express how meticulously the team has scrutinized every last element to ensure that those metrics are accurate, which they are."

While the SAM team attempted to validate their methane detections, planetary scientist John E. Moores of York University in Toronto, a member of Curiosity's science team, made an intriguing prediction in 2019. "I took a very Canadian approach to this," Moores explained, "in that I addressed the question, 'What if Curiosity and TGO are both correct?'"

Moores and other Curiosity group members investigating wind patterns in Gale Crater theorized that the difference in methane measurements is due to the time of day they are obtained. TLS is only active at night, when other Curiosity instruments aren't working, because it requires a lot of power. Moores pointed out that because the Martian atmosphere is silent at night, methane escaping from the ground gathers near the surface, where Curiosity can detect it.

TGO, on the other hand, necessitates the use of sunlight to locate methane around 5 kilometers above the ground. "During the day, any atmosphere near a planet's surface goes through a cycle," Moores explained. Warm air rises and cold air falls when the Sun's heat stirs the atmosphere. As a result, methane trapped near the surface at night mixes with the rest of the atmosphere during the day, diluting it to undetectable levels. Moores explained, "I understood that no instrument, especially one in orbit, would see anything."

The Curiosity crew chose to put Moores' prediction to the test right away by taking the first high-precision diurnal measurements. TLS took three measurements of methane over the course of a Martian day, combining a nocturnal measurement with two daylight measurements. SAM took in Martian air for two hours at a time during each experiment, continually eliminating carbon dioxide, which makes up 95 percent of the planet's atmosphere. TLS was able to quantify a concentrated sample of methane by repeatedly running an infrared laser beam across it, one that was calibrated to employ an exact wavelength of light that is absorbed by the methane.
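The physical principle behind an absorption measurement like this is, in essence, the Beer-Lambert law: the fraction of laser light absorbed at the chosen wavelength grows with the amount of methane in the beam path. The toy calculation below is only meant to illustrate that relationship; the intensities and the combined absorption coefficient are invented numbers, not TLS calibration values.

```python
# Illustrative Beer-Lambert calculation; constants are invented, not TLS values.
import math

I0 = 1.000          # laser intensity entering the sample cell (arbitrary units)
I = 0.984           # intensity reaching the detector after absorption
epsilon_L = 4.0e7   # assumed absorption coefficient x path length, per mole fraction

absorbance = math.log(I0 / I)             # natural-log absorbance
mole_fraction = absorbance / epsilon_L    # Beer-Lambert: A = epsilon * L * c
print(f"methane mole fraction ~ {mole_fraction * 1e9:.1f} parts per billion")
```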

"Our two daytime observations validated John's prediction that methane should effectively decrease to zero throughout the day," said Paul Mahaffy, principal investigator for SAM, which is based at NASA's Goddard Space Flight Center in Greenbelt, Maryland. The TLS measurement taken at night falls well within the team's determined average. "So that's one approach to settle this huge gap," Mahaffy remarked.

While this study reveals that methane emissions at the surface of Gale Crater rise and fall throughout the day, scientists have yet to solve the larger methane puzzle on Mars. Methane is a durable molecule that should last roughly 300 years on Mars before being destroyed by solar radiation. If methane is continually seeping from all similar craters, as scientists suspect given that Gale is not thought to be geologically unusual, enough should have accumulated in the atmosphere for the TGO orbiter to detect it. Scientists therefore believe that something is destroying the methane in less than 300 years.

Investigations are ongoing to determine whether dust-induced low-level electrical discharges in the Martian atmosphere may destroy methane, or whether abundant oxygen at the Martian surface quickly destroys the methane before it reaches the upper atmosphere.

Curated By Gerluxe

What Is A Robot?

 

asimo robot
Image Wikipedia

A robot is a mechanical or virtual artificial entity. In practice, it is usually an electromechanical system that, by its appearance or movements, conveys a sense of having its own purpose. Because of the autonomy it achieves through its movements, its actions merit careful and in-depth study in science and technology. The term robot can refer to both physical mechanisms and virtual software systems, although the latter are usually called bots.

Although there is no universal agreement on which machines qualify as robots, experts and the general public agree that robots must be able to move, operate a mechanical arm, sense and manipulate their environment, and exhibit intelligent behavior, especially if it resembles that of humans or other animals. A robot is now defined as a computer with the ability and purpose of movement that is capable of executing many tasks in a flexible manner according to its programming, distinguishing it from a specialized household appliance.

Although there is a long history of stories about mechanical helpers and companions, as well as attempts to build them, truly autonomous machines did not appear until the twentieth century. The Unimate, the first programmable, digitally controlled robot, was installed in 1961 to lift and stack hot metal pieces from a die-casting machine.

Household cleaning and maintenance robots are becoming increasingly popular. However, there is also concern about the economic impact of automation and the threat of robotic weaponry, which is mirrored in popular culture's sometimes twisted and malevolent depictions of robots. Real robots remain limited in comparison to their fictional counterparts.

Curated By Gerluxe

The Artemis Mission's Flying Mannequin Already Has a Name

 

Commander Moonikin
Image NASA

The official name of the mannequin that will be launched onboard Artemis I, NASA's uncrewed test flight of the SLS rocket and Orion spacecraft around the Moon later this year, is "Commander Moonikin Campos." The moonikin (a play on words for "lunar mannequin") got its name through a bracket-style contest celebrating NASA figures and projects, as well as astronomical phenomena, in which more than 300,000 people voted.

Arturo Campos, who played a significant part in bringing Apollo 13 safely back to Earth, is honored by the moniker Campos. The final match was between Campos and Delos, an allusion to the Greek mythological island where Apollo and Artemis were born.

"We are continuously seeking for new methods to include the public in our missions, and our return to the Moon via Artemis is a worldwide effort." "This event pays honor to an essential member of our NASA family - Arturo Campos," said Brian Odom, NASA's acting chief historian at Marshall Space Flight Center in Huntsville, Alabama. "It's a fitting homage to Campos that the data from Artemis I will aid in our preparations to send humans to the Moon, including the first woman and the first person of color, to train for a mission to Mars."
 

Campos will wear two radiation sensors, as well as additional acceleration and vibration sensors under his headrest and below his seat, throughout the flight. The data gathered from the lunar mannequin will help NASA protect astronauts during Artemis II, the first mission in more than 50 years to send a crew around the Moon.

The "lunar mannequin" is one of three "passengers" that will travel onboard Orion to test the spacecraft's systems. Two phantoms, models of female human torsos, will also be on board. "Zohar" and "Helga," named by the Israel Space Agency (ISA) and the German Aerospace Center (DLR) respectively, will support the Matroshka AstroRad Radiation Experiment (MARE), a study to provide data on radiation levels encountered during lunar missions.

The SLS rocket and Orion spacecraft, as well as a commercial human landing mechanism and the Gateway Lunar Orbital Link Station, are all critical components of NASA's deep space exploration goals. NASA's Artemis program aims to establish the first long-term presence on and around the Moon in collaboration with commercial and international partners. NASA will use the Moon for humanity's next major leap: sending the first astronauts to Mars, using robots and humans to explore further than ever before.

Curated By Gerluxe

A biped robot runs 5 kilometers

On a single battery charge, this biped robot can run 5 kilometers in under an hour. Image: Wikimedia. For the time being, it won't be ab...