id (stringlengths 1-169) | pr-title (stringlengths 2-190) | pr-article (stringlengths 0-65k) | pr-summary (stringlengths 47-4.27k) | sc-title (stringclasses 2 values) | sc-article (stringlengths 0-2.03M) | sc-abstract (stringclasses 2 values) | sc-section_names (sequencelengths 0-0) | sc-sections (sequencelengths 0-0) | sc-authors (sequencelengths 0-0) | source (stringclasses 2 values) | Topic (stringclasses 10 values) | Citation (stringlengths 4-4.58k) | Paper_URL (stringlengths 4-213) | News_URL (stringlengths 4-119) | pr-summary-and-article (stringlengths 49-66.1k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
517 | Algorithm Uses Online Ads to Identify Human Traffickers | Researchers at Carnegie Mellon University (CMU) and Canada's McGill University hope to identify human trafficking by adapting an algorithm, originally used to spot data anomalies, to detect similarities across escort ads. CMU's Christos Faloutsos said the InfoShield algorithm scans public datasets and clusters textual similarities, and could help law enforcement direct probes and better identify human traffickers and their victims. Said Faloutsos, "Our algorithm can put the millions of advertisements together and highlight the common parts. If they have a lot of things in common, it's not guaranteed, but it's highly likely that it is something suspicious." When tested on a set of escort listings in which experts had already identified trafficking, the algorithm flagged them with 85% precision, while producing no false positives. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Carnegie Mellon University (CMU) and Canada's McGill University hope to identify human trafficking by adapting an algorithm, originally used to spot data anomalies, to detect similarities across escort ads. CMU's Christos Faloutsos said the InfoShield algorithm scans public datasets and clusters textual similarities, and could help law enforcement direct probes and better identify human traffickers and their victims. Said Faloutsos, "Our algorithm can put the millions of advertisements together and highlight the common parts. If they have a lot of things in common, it's not guaranteed, but it's highly likely that it is something suspicious." When tested on a set of escort listings in which experts had already identified trafficking, the algorithm flagged them with 85% precision, while producing no false positives.
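To make the clustering idea above concrete, here is a generic text-similarity sketch in Python: it groups ads whose wording overlaps heavily by comparing TF-IDF vectors and linking near-duplicates. This is not the InfoShield algorithm itself, only an illustration of the underlying idea; the example ads and the 0.6 similarity threshold are invented for the demo.

```python
# Generic sketch of grouping ads by shared wording, in the spirit of the
# clustering step described above. NOT the InfoShield algorithm; the
# example ads and the 0.6 similarity threshold are invented.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ads = [
    "New in town, call 555-0101, available all day",
    "New in town, call 555-0102, available all day",  # near-duplicate wording
    "Certified massage therapist, weekdays only",
]

# Represent each ad as a TF-IDF vector and compare every pair of ads.
tfidf = TfidfVectorizer().fit_transform(ads)
similarity = cosine_similarity(tfidf)

# Link ads whose textual overlap exceeds the threshold, then read off
# groups of similar ads as connected components of that graph.
adjacency = csr_matrix(similarity > 0.6)
n_groups, labels = connected_components(adjacency, directed=False)

for group in range(n_groups):
    members = np.where(labels == group)[0]
    if len(members) > 1:  # only groups with shared text are of interest
        print(f"Suspiciously similar ads: {members.tolist()}")
```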
|
||||
518 | Which City Rates as Best Place to Work in Tech? | For years now, tech professionals reportedly have been trading the San Francisco Bay Area for more inviting places to live. These reports, at least on a broad scale, didn't ring particularly true until the pandemic, belied by the ever-increasing Silicon Valley traffic and tight housing market.
But now with vast numbers of tech professionals working remotely and tech companies moving at least their headquarter operations outside of Silicon Valley, it is less certain that the prime tech hub will remain near San Francisco. Rents in San Francisco , for example, are still down from a year ago, though they've been picking up since February.
Where are the engineers and other tech professionals going? The exact answer to that question depends on who you ask. According to recent data from MoveBuddha , the top five destinations for relocating Bay Area residents in 2020 were Texas, Washington, New York, Colorado, and Florida; a previous study that looked more specifically at San Francisco residents relocating to other cities put New York City, Los Angeles, Seattle, Brooklyn (somehow considering that borough different from New York City as a whole), Austin, and Chicago on top. According to job search firm Dice , the top cities for tech employment these days are New York, San Francisco, Chicago, Atlanta, and Los Angeles.
So that, roughly, is where U.S. tech professionals are. This month, Blind , the anonymous social network for professionals, set out to find out how several of these U.S. tech hubs stack up in the eyes of the tech professionals living and working there. It surveyed 1085 Blind users at tech companies in Austin, Chicago, New York City, the San Francisco Bay Area, and Seattle, asking them to rate their own city for tech friendliness in several categories.
Unsurprisingly, people had a lot of love for the place in which they had chosen to live; a majority in each city rated that location high in most categories. But differences did emerge. Austin, for example, came out on top as the place where the local government makes an effort to help tech workers and companies thrive; New York is the best place for finding fellow techies to hang out with, and the San Francisco Bay Area still offers the most in terms of career opportunities. | Professional social network Blind surveyed 1,085 users at technology firms in Austin, Chicago, New York City (NYC), the San Francisco Bay Area, and Seattle, in order to determine which cities they think treat tech workers best. Although most respondents in each city rated that location high in a majority of categories, differences did emerge. Austin was rated the top city where local government helps tech workers and companies prosper. NYC ranked highest as a place where fellow techies can be found to socialize with, while respondents indicated they saw the San Francisco Bay Area as having the most tech career opportunities. | [] | [] | [] | scitechnews | None | None | None | None | Professional social network Blind surveyed 1,085 users at technology firms in Austin, Chicago, New York City (NYC), the San Francisco Bay Area, and Seattle, in order to determine which cities they think treat tech workers best. Although most respondents in each city rated that location high in a majority of categories, differences did emerge. Austin was rated the top city where local government helps tech workers and companies prosper. NYC ranked highest as a place where fellow techies can be found to socialize with, while respondents indicated they saw the San Francisco Bay Area as having the most tech career opportunities.
For years now, tech professionals reportedly have been trading the San Francisco Bay Area for more inviting places to live. These reports, at least on a broad scale, didn't ring particularly true until the pandemic, belied by the ever-increasing Silicon Valley traffic and tight housing market.
But now with vast numbers of tech professionals working remotely and tech companies moving at least their headquarter operations outside of Silicon Valley, it is less certain that the prime tech hub will remain near San Francisco. Rents in San Francisco , for example, are still down from a year ago, though they've been picking up since February.
Where are the engineers and other tech professionals going? The exact answer to that question depends on who you ask. According to recent data from MoveBuddha , the top five destinations for relocating Bay Area residents in 2020 were Texas, Washington, New York, Colorado, and Florida; a previous study that looked more specifically at San Francisco residents relocating to other cities put New York City, Los Angeles, Seattle, Brooklyn (somehow considering that borough different from New York City as a whole), Austin, and Chicago on top. According to job search firm Dice , the top cities for tech employment these days are New York, San Francisco, Chicago, Atlanta, and Los Angeles.
So that, roughly, is where U.S. tech professionals are. This month, Blind , the anonymous social network for professionals, set out to find out how several of these U.S. tech hubs stack up in the eyes of the tech professionals living and working there. It surveyed 1085 Blind users at tech companies in Austin, Chicago, New York City, the San Francisco Bay Area, and Seattle, asking them to rate their own city for tech friendliness in several categories.
Unsurprisingly, people had a lot of love for the place in which they had chosen to live; a majority in each city rated that location high in most categories. But differences did emerge. Austin, for example, came out on top as the place where the local government makes an effort to help tech workers and companies thrive; New York is the best place for finding fellow techies to hang out with, and the San Francisco Bay Area still offers the most in terms of career opportunities. |
|||
520 | Multiple Agencies Breached by Hackers Using Pulse Secure Vulnerabilities | Federal authorities announced Tuesday that hackers breached multiple government agencies and other critical organizations by exploiting vulnerabilities in products from a Utah-based software company.
"CISA is aware of compromises affecting U.S. government agencies, critical infrastructure entities, and other private sector organizations by a cyber threat actor - or actors - beginning in June 2020 or earlier related vulnerabilities in certain Ivanti Pulse Connect Secure products," the Cybersecurity and Infrastructure Security Agency (CISA) said in an alert .
The agency, the cybersecurity arm of the Department of Homeland Security, noted that it had been assisting compromised organizations since March 31 and that the hackers used vulnerabilities to place webshells in the Pulse Connect Secure products, which allowed them to bypass passwords, multifactor authentication and other security features.
The agency wrote that Ivanti was developing a patch for these vulnerabilities and that it "strongly encouraged" all organizations using these products to update to the newest version and investigate for signs of compromise.
In addition, CISA put out an emergency directive Tuesday night requiring all federal agencies to assess how many Pulse Connect Secure products they and third-party organizations used and to update these products by April 23.
"CISA has determined that this exploitation of Pulse Connect Secure products poses an unacceptable risk to Federal Civilian Executive Branch agencies and requires emergency action," the agency wrote in the directive. "This determination is based on the current exploitation of these vulnerabilities by threat actors in external network environments, the likelihood of the vulnerabilities being exploited, the prevalence of the affected software in the federal enterprise, the high potential for a compromise of agency information systems, and the potential impact of a successful compromise."
The alert was released after cybersecurity group FireEye's Mandiant Solutions, which is working with Ivanti to respond to the hacking incident, published a blog post attributing some of the hacking activity to a Chinese state-sponsored hacking group and another Chinese advanced persistent threat group.
Mandiant found that the hacking group had targeted organizations in the U.S. Defense Industrial Base and European organizations and stressed that it was in the "early stages" of full attribution.
A spokesperson for Ivanti told The Hill Tuesday that the patch for the vulnerabilities would be released in May and that only a "limited number" of customers had been compromised.
"The Pulse Connect Secure (PCS) team is in contact with a limited number of customers who have experienced evidence of exploit behavior on their PCS appliances," the spokesperson told The Hill in a statement. "The PCS team has provided remediation guidance to these customers directly."
The company also published a blog post detailing more about the vulnerabilities, noting that it was working with CISA, FireEye and other leading industry experts to investigate the hacking incident.
"A secure computing environment is more important each and every day to how we work and live, as threats evolve and emerge," PCS Chief Security Officer Phil Richards wrote in the blog post. "We are making significant investments to enhance our overall cyber security infrastructure, including evolving standards of code development and conducting a full code integrity review."
The new breach comes on the heels of two other major security incidents that CISA has helped respond to over the past four months.
The SolarWinds hack, carried out by Russian hackers and first discovered in December, compromised nine federal agencies and 100 private sector groups. The response to this was compounded when Microsoft announced new vulnerabilities in its Exchange Server application that were used by at least one Chinese hacking group to compromise thousands of organizations.
CISA issued alerts ordering all federal agencies to investigate for signs of compromise in both hacking incidents and patch their systems and was one of four federal agencies in a unified coordination group that was formed to investigate each incident.
A senior Biden administration official announced earlier this week that the group would be "standing down" due to a reduction in victims. President Biden also plans to shortly sign an executive order aimed at shoring up federal cybersecurity.
Updated at 7:32 p.m. | The U.S. Cybersecurity and Infrastructure Security Agency (CISA) said hackers had infiltrated federal agencies and other critical organizations by exploiting flaws in products from Utah-based software company Ivanti Pulse Connect Secure (PCS). The CISA alert followed cybersecurity group FireEye's Mandiant Solutions' publication of a blog post attributing some breaches to a Chinese state-sponsored hacking group and another Chinese advanced persistent threat group. CISA said that hackers had installed webshells in PCS products, which enabled them to circumvent security features. The agency said Ivanti was developing a patch, adding that it "strongly encouraged" all users to update to the latest version of the software and to look for signs of breaches. CISA issued an emergency directive requiring all federal agencies to evaluate how many PCS products they and third-party organizations used, and to update them by April 23. | [] | [] | [] | scitechnews | None | None | None | None | The U.S. Cybersecurity and Infrastructure Security Agency (CISA) said hackers had infiltrated federal agencies and other critical organizations by exploiting flaws in products from Utah-based software company Ivanti Pulse Connect Secure (PCS). The CISA alert followed cybersecurity group FireEye's Mandiant Solutions' publication of a blog post attributing some breaches to a Chinese state-sponsored hacking group and another Chinese advanced persistent threat group. CISA said that hackers had installed webshells in PCS products, which enabled them to circumvent security features. The agency said Ivanti was developing a patch, adding that it "strongly encouraged" all users to update to the latest version of the software and to look for signs of breaches. CISA issued an emergency directive requiring all federal agencies to evaluate how many PCS products they and third-party organizations used, and to update them by April 23.
Federal authorities announced Tuesday that hackers breached multiple government agencies and other critical organizations by exploiting vulnerabilities in products from a Utah-based software company.
"CISA is aware of compromises affecting U.S. government agencies, critical infrastructure entities, and other private sector organizations by a cyber threat actor - or actors - beginning in June 2020 or earlier related vulnerabilities in certain Ivanti Pulse Connect Secure products," the Cybersecurity and Infrastructure Security Agency (CISA) said in an alert .
The agency, the cybersecurity arm of the Department of Homeland Security, noted that it had been assisting compromised organizations since March 31 and that the hackers used vulnerabilities to place webshells in the Pulse Connect Secure products, which allowed them to bypass passwords, multifactor authentication and other security features.
The agency wrote that Ivanti was developing a patch for these vulnerabilities and that it "strongly encouraged" all organizations using these products to update to the newest version and investigate for signs of compromise.
In addition, CISA put out an emergency directive Tuesday night requiring all federal agencies to assess how many Pulse Connect Secure products they and third-party organizations used and to update these products by April 23.
"CISA has determined that this exploitation of Pulse Connect Secure products poses an unacceptable risk to Federal Civilian Executive Branch agencies and requires emergency action," the agency wrote in the directive. "This determination is based on the current exploitation of these vulnerabilities by threat actors in external network environments, the likelihood of the vulnerabilities being exploited, the prevalence of the affected software in the federal enterprise, the high potential for a compromise of agency information systems, and the potential impact of a successful compromise."
The alert was released after cybersecurity group FireEye's Mandiant Solutions, which is working with Ivanti to respond to the hacking incident, published a blog post attributing some of the hacking activity to a Chinese state-sponsored hacking group and another Chinese advanced persistent threat group.
Mandiant found that the hacking group had targeted organizations in the U.S. Defense Industrial Base and European organizations and stressed that it was in the "early stages" of full attribution.
A spokesperson for Ivanti told The Hill Tuesday that the patch for the vulnerabilities would be released in May and that only a "limited number" of customers had been compromised.
"The Pulse Connect Secure (PCS) team is in contact with a limited number of customers who have experienced evidence of exploit behavior on their PCS appliances," the spokesperson told The Hill in a statement. "The PCS team has provided remediation guidance to these customers directly."
The company also published a blog post detailing more about the vulnerabilities, noting that it was working with CISA, FireEye and other leading industry experts to investigate the hacking incident.
"A secure computing environment is more important each and every day to how we work and live, as threats evolve and emerge," PCS Chief Security Officer Phil Richards wrote in the blog post. "We are making significant investments to enhance our overall cyber security infrastructure, including evolving standards of code development and conducting a full code integrity review."
The new breach comes on the heels of two other major security incidents that CISA has helped respond to over the past four months.
The SolarWinds hack, carried out by Russian hackers and first discovered in December, compromised nine federal agencies and 100 private sector groups. The response to this was compounded when Microsoft announced new vulnerabilities in its Exchange Server application that were used by at least one Chinese hacking group to compromise thousands of organizations.
CISA issued alerts ordering all federal agencies to investigate for signs of compromise in both hacking incidents and patch their systems and was one of four federal agencies in a unified coordination group that was formed to investigate each incident.
A senior Biden administration official announced earlier this week that the group would be "standing down" due to a reduction in victims. President Biden also plans to shortly sign an executive order aimed at shoring up federal cybersecurity.
Updated at 7:32 p.m. |
|||
521 | ML Model Generates Realistic Seismic Waveforms | LOS ALAMOS, N.M., April 22, 2021 - A new machine-learning model that generates realistic seismic waveforms will reduce manual labor and improve earthquake detection, according to a study published recently in JGR Solid Earth .
"To verify the efficacy of our generative model, we applied it to seismic field data collected in Oklahoma," said Youzuo Lin, a computational scientist in Los Alamos National Laboratory's Geophysics group and principal investigator of the project. "Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms."
Quickly and accurately detecting earthquakes can be a challenging task. Visual detection done by people has long been considered the gold standard, but requires intensive manual labor that scales poorly to large data sets. In recent years, automatic detection methods based on machine learning have improved the accuracy and efficiency of data collection; however, the accuracy of those methods relies on access to a large amount of high‐quality, labeled training data, often tens of thousands of records or more.
To resolve this data dilemma, the research team developed SeismoGen based on a generative adversarial network (GAN), which is a type of deep generative model that can generate high‐quality synthetic samples in multiple domains. In other words, deep generative models train machines to do things and create new data that could pass as real.
Once trained, the SeismoGen model is capable of producing realistic seismic waveforms of multiple labels. When applied to real Earth seismic datasets in Oklahoma, the team saw that data augmentation from SeismoGen‐generated synthetic waveforms could be used to improve earthquake detection algorithms in instances when only small amounts of labeled training data are available.
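For readers unfamiliar with GANs, the sketch below shows the generator/discriminator training loop in its most basic form, applied to 1-D traces. It is a minimal generic illustration, not the SeismoGen architecture; the network sizes, waveform length, and training hyperparameters are arbitrary assumptions.

```python
# Minimal generator/discriminator sketch for 1-D waveform synthesis.
# Generic GAN illustration, NOT the SeismoGen model; all sizes are
# arbitrary assumptions.
import torch
import torch.nn as nn

WAVEFORM_LEN = 1024  # assumed length of a synthetic seismic trace
LATENT_DIM = 64      # assumed size of the random noise input

generator = nn.Sequential(       # noise -> fake waveform
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, WAVEFORM_LEN), nn.Tanh(),
)
discriminator = nn.Sequential(   # waveform -> probability it is real
    nn.Linear(WAVEFORM_LEN, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    """One adversarial update: the discriminator learns to separate real
    traces from generated ones, then the generator learns to fool it."""
    n = real_batch.size(0)
    fake = generator(torch.randn(n, LATENT_DIM))

    # Discriminator update: push real traces toward 1 and fakes toward 0.
    opt_d.zero_grad()
    d_loss = (bce(discriminator(real_batch), torch.ones(n, 1))
              + bce(discriminator(fake.detach()), torch.zeros(n, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator output 1 for fakes.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()

# Stand-in "real" data: a batch of 32 random traces.
train_step(torch.randn(32, WAVEFORM_LEN))
```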
Paper: SeismoGen: Seismic Waveform Synthesis Using GAN with Application to Seismic Data Augmentation, Tiantong Wang, Daniel Trugman, and Youzuo Lin. Published in JGR Solid Earth. April, Volume 126, Issue 4, e2020JB020077, 2021. DOI: 10.1029/2020JB020077
Funding: The research was supported by the Center for Space and Earth Science and the Laboratory Directed Research and Development program under project number 20210542MFR at Los Alamos National Laboratory.
LA-UR-21-23818 | The SeismoGen machine learning model can generate high-quality synthetic seismic waveforms, according to researchers at the U.S. Department of Energy's Los Alamos National Laboratory (LANL). The team designed SeismoGen based on a generative adversarial network. Once trained, the SeismoGen model can produce realistic seismic waveforms of multiple labels. The LANL researchers applied the model to actual Earth seismic datasets in Oklahoma. LANL's Youzuo Lin said, "Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms." | [] | [] | [] | scitechnews | None | None | None | None | The SeismoGen machine learning model can generate high-quality synthetic seismic waveforms, according to researchers at the U.S. Department of Energy's Los Alamos National Laboratory (LANL). The team designed SeismoGen based on a generative adversarial network. Once trained, the SeismoGen model can produce realistic seismic waveforms of multiple labels. The LANL researchers applied the model to actual Earth seismic datasets in Oklahoma. LANL's Youzuo Lin said, "Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms."
LOS ALAMOS, N.M., April 22, 2021 - A new machine-learning model that generates realistic seismic waveforms will reduce manual labor and improve earthquake detection, according to a study published recently in JGR Solid Earth .
"To verify the efficacy of our generative model, we applied it to seismic field data collected in Oklahoma," said Youzuo Lin, a computational scientist in Los Alamos National Laboratory's Geophysics group and principal investigator of the project. "Through a sequence of qualitative and quantitative tests and benchmarks, we saw that our model can generate high-quality synthetic waveforms and improve machine learning-based earthquake detection algorithms."
Quickly and accurately detecting earthquakes can be a challenging task. Visual detection done by people has long been considered the gold standard, but requires intensive manual labor that scales poorly to large data sets. In recent years, automatic detection methods based on machine learning have improved the accuracy and efficiency of data collection; however, the accuracy of those methods relies on access to a large amount of high‐quality, labeled training data, often tens of thousands of records or more.
To resolve this data dilemma, the research team developed SeismoGen based on a generative adversarial network (GAN), which is a type of deep generative model that can generate high‐quality synthetic samples in multiple domains. In other words, deep generative models train machines to do things and create new data that could pass as real.
Once trained, the SeismoGen model is capable of producing realistic seismic waveforms of multiple labels. When applied to real Earth seismic datasets in Oklahoma, the team saw that data augmentation from SeismoGen‐generated synthetic waveforms could be used to improve earthquake detection algorithms in instances when only small amounts of labeled training data are available.
Paper: SeismoGen: Seismic Waveform Synthesis Using GAN with Application to Seismic Data Augmentation, Tiantong Wang, Daniel Trugman, and Youzuo Lin. Published in JGR Solid Earth. April, Volume 126, Issue 4, e2020JB020077, 2021. DOI: 10.1029/2020JB020077
Funding: The research was supported by the Center for Space and Earth Science and the Laboratory Directed Research and Development program under project number 20210542MFR at Los Alamos National Laboratory.
LA-UR-21-23818 |
|||
523 | Stanford Researchers Use AI to Empower Environmental Regulators | By Rob Jordan Stanford Woods Institute for the Environment
Like superheroes capable of seeing through obstacles, environmental regulators may soon wield the power of all-seeing eyes that can identify violators anywhere at any time, according to a new Stanford University-led study . The paper, published the week of April 19 in Proceedings of the National Academy of Sciences (PNAS) , demonstrates how artificial intelligence combined with satellite imagery can provide a low-cost, scalable method for locating and monitoring otherwise hard-to-regulate industries.
"Brick kilns have proliferated across Bangladesh to supply the growing economy with construction materials, which makes it really hard for regulators to keep up with new kilns that are constructed," said co-lead author Nina Brooks, a postdoctoral associate at the University of Minnesota's Institute for Social Research and Data Innovation who did the research while a PhD student at Stanford.
While previous research has shown the potential to use machine learning and satellite observations for environmental regulation, most studies have focused on wealthy countries with dependable data on industrial locations and activities. To explore the feasibility in developing countries, the Stanford-led research focused on Bangladesh, where government regulators struggle to locate highly pollutive informal brick kilns, let alone enforce rules.
Bricks are key to development across South Asia, especially in regions that lack other construction materials, and the kilns that make them employ millions of people. However, their highly inefficient coal burning presents major health and environmental risks. In Bangladesh, brick kilns are responsible for 17 percent of the country's total annual carbon dioxide emissions, and in Dhaka, the country's most populous city, up to half of the small particulate matter considered especially dangerous to human lungs. It's a significant contributor to the country's overall air pollution, which is estimated to reduce Bangladeshis' average life expectancy by almost two years.
"Air pollution kills seven million people every year," said study senior author Stephen Luby , a professor of infectious diseases at Stanford's School of Medicine . "We need to identify the sources of this pollution and reduce these emissions."
Bangladesh government regulators are attempting to manually map and verify the locations of brick kilns across the country, but the effort is incredibly time and labor intensive. It's also highly inefficient because of the rapid proliferation of kilns. The work is also likely to suffer from inaccuracy and bias, as government data in low-income countries often does, according to the researchers.
Since 2016, Brooks, Luby and other Stanford researchers have worked in Bangladesh to pinpoint kiln locations, quantify brick kilns' adverse health effects and provide transparent public information to inform political change. They had developed an approach using infrared to pick out coal-burning kilns from remotely sensed data. While promising, the approach had serious flaws, such as the inability to distinguish between kilns and heat-trapping agricultural land.
Working with Stanford computer scientists and engineers, as well as scientists at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b), the team shifted focus to machine learning.
Building on past applications of deep-learning to environmental monitoring, and on specific efforts to use deep learning to identify brick kilns, they developed a highly accurate algorithm that not only identifies whether images contain kilns but also learns to localize kilns within the image. The method rebuilds kilns that have been fragmented across multiple images - an inherent problem with satellite imagery - and is able to identify when multiple kilns are contained within a single image. The researchers are also able to distinguish between two kiln technologies - one of which is banned - based on shape classification.
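As a rough illustration of the kind of model such a pipeline might use, the sketch below classifies fixed-size satellite image tiles with a small convolutional network. It is not the Stanford model; the tile size, channel count, and the three illustrative classes are assumptions, and the localization and fragment-stitching steps described above are omitted.

```python
# Generic sketch of a convolutional classifier over satellite image tiles.
# NOT the Stanford model: tile size, channels, and the three illustrative
# classes (no kiln / newer kiln type / banned kiln type) are assumptions,
# and the localization and fragment-stitching steps are omitted.
import torch
import torch.nn as nn

TILE_SIZE = 64    # assumed pixel size of each image tile
NUM_CLASSES = 3   # assumed: background, newer kiln technology, banned kiln type

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (TILE_SIZE // 4) ** 2, NUM_CLASSES),
)

def classify_tiles(tiles: torch.Tensor) -> torch.Tensor:
    """Return one predicted class per tile; tiles has shape (N, 3, 64, 64)."""
    with torch.no_grad():
        return model(tiles).argmax(dim=1)

# Usage with stand-in data: eight random RGB tiles (the model is untrained,
# so the predictions are meaningless and only demonstrate the interface).
print(classify_tiles(torch.rand(8, 3, TILE_SIZE, TILE_SIZE)))
```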
The approach revealed that more than three-fourths of kilns in Bangladesh are illegally constructed within 1 kilometer (0.6 mile) of a school, and almost 10 percent are illegally close to health facilities. It also showed that the government systematically underreports kilns with respect to regulations and - according to the shape classification findings - overreports the percentage of kilns using a newer, cleaner technology relative to an older, banned approach. The researchers also found higher numbers of registered kilns in districts adjacent to the banned districts, suggesting kilns are formally registered in the districts where they are legal but constructed across district borders.
The researchers are working to address the approach's limitations by developing ways to use lower-resolution imagery as well as to expand their work to other regions where bricks are constructed similarly. Getting it right could make a big difference. In Bangladesh alone, almost everyone lives within 10 kilometers (6.2 miles) of a brick kiln, and more than 18 million - more than twice the population of New York City - live within 1 kilometer (0.6 mile), according to the researchers' estimates.
"We are hopeful our general approach can enable more effective regulation and policies to achieve better health and environmental outcomes in the future," said co-lead author Jihyeon Lee, BS '19, a researcher in Stanford's Sustainability and Artificial Intelligence Lab .
Luby is also a senior fellow at the Stanford Woods Institute for the Environment and the Freeman Spogli Institute for International Studies and a member of Stanford Bio-X and the Stanford Maternal & Child Health Research Institute . Co-authors of the study also include Fahim Tajwar, an undergraduate student in computer science; Marshall Burke , an associate professor of Earth system science in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth) and a senior fellow at the Freeman Spogli Institute for International Studies, at the Stanford Woods Institute for the Environment and at the Stanford Institute for Economic Policy Research ; Stefano Ermon , an assistant professor of computer science in Stanford's School of Engineering and a center fellow at the Stanford Woods Institute for the Environment; David Lobell , the Gloria and Richard Kushel Director of the Center on Food Security and the Environment , a professor of Earth system science , the William Wrigley Senior Fellow at the Stanford Woods Institute for the Environment and at the Freeman Spogli Institute for International Studies, and a senior fellow at the Stanford Institute for Economic Policy Research; and Debashish Biswas, an assistant scientist at the International Centre for Diarrheal Disease Research, Bangladesh ( icddr,b ).
The research was funded by the Stanford King Center on Global Development , the Stanford Woods Institute for the Environment and the National Science Foundation.
To read all stories about Stanford science, subscribe to the biweekly Stanford Science Digest .
-30- | Stanford University researchers have demonstrated how artificial intelligence combined with satellite imagery creates a low-cost, scalable method for finding and monitoring otherwise hard-to-oversee industries, which environmental regulators could employ to spot violators. Previous research has tended to focus on wealthy countries, while the Stanford-led work concentrated on Bangladesh, where localization and enforcement of environmental regulations related to highly pollutive brick kilns is difficult. In partnership with the International Center for Diarrheal Disease Research, Bangladesh, the team devised a deep learning algorithm that not only identifies whether satellite images contain such kilns, but also learns to locate kilns within the image. The algorithm can reconstruct kilns fragmented across multiple images, identify multiple kilns within a single image, and differentiate between sanctioned and illegal kiln technologies based on shape classification. | [] | [] | [] | scitechnews | None | None | None | None | Stanford University researchers have demonstrated how artificial intelligence combined with satellite imagery creates a low-cost, scalable method for finding and monitoring otherwise hard-to-oversee industries, which environmental regulators could employ to spot violators. Previous research has tended to focus on wealthy countries, while the Stanford-led work concentrated on Bangladesh, where localization and enforcement of environmental regulations related to highly pollutive brick kilns is difficult. In partnership with the International Center for Diarrheal Disease Research, Bangladesh, the team devised a deep learning algorithm that not only identifies whether satellite images contain such kilns, but also learns to locate kilns within the image. The algorithm can reconstruct kilns fragmented across multiple images, identify multiple kilns within a single image, and differentiate between sanctioned and illegal kiln technologies based on shape classification.
By Rob Jordan Stanford Woods Institute for the Environment
Like superheroes capable of seeing through obstacles, environmental regulators may soon wield the power of all-seeing eyes that can identify violators anywhere at any time, according to a new Stanford University-led study . The paper, published the week of April 19 in Proceedings of the National Academy of Sciences (PNAS) , demonstrates how artificial intelligence combined with satellite imagery can provide a low-cost, scalable method for locating and monitoring otherwise hard-to-regulate industries.
"Brick kilns have proliferated across Bangladesh to supply the growing economy with construction materials, which makes it really hard for regulators to keep up with new kilns that are constructed," said co-lead author Nina Brooks, a postdoctoral associate at the University of Minnesota's Institute for Social Research and Data Innovation who did the research while a PhD student at Stanford.
While previous research has shown the potential to use machine learning and satellite observations for environmental regulation, most studies have focused on wealthy countries with dependable data on industrial locations and activities. To explore the feasibility in developing countries, the Stanford-led research focused on Bangladesh, where government regulators struggle to locate highly pollutive informal brick kilns, let alone enforce rules.
Bricks are key to development across South Asia, especially in regions that lack other construction materials, and the kilns that make them employ millions of people. However, their highly inefficient coal burning presents major health and environmental risks. In Bangladesh, brick kilns are responsible for 17 percent of the country's total annual carbon dioxide emissions, and in Dhaka, the country's most populous city, up to half of the small particulate matter considered especially dangerous to human lungs. It's a significant contributor to the country's overall air pollution, which is estimated to reduce Bangladeshis' average life expectancy by almost two years.
"Air pollution kills seven million people every year," said study senior author Stephen Luby , a professor of infectious diseases at Stanford's School of Medicine . "We need to identify the sources of this pollution and reduce these emissions."
Bangladesh government regulators are attempting to manually map and verify the locations of brick kilns across the country, but the effort is incredibly time and labor intensive. It's also highly inefficient because of the rapid proliferation of kilns. The work is also likely to suffer from inaccuracy and bias, as government data in low-income countries often does, according to the researchers.
Since 2016, Brooks, Luby and other Stanford researchers have worked in Bangladesh to pinpoint kiln locations, quantify brick kilns' adverse health effects and provide transparent public information to inform political change. They had developed an approach using infrared to pick out coal-burning kilns from remotely sensed data. While promising, the approach had serious flaws, such as the inability to distinguish between kilns and heat-trapping agricultural land.
Working with Stanford computer scientists and engineers, as well as scientists at the International Centre for Diarrheal Disease Research, Bangladesh (icddr,b), the team shifted focus to machine learning.
Building on past applications of deep-learning to environmental monitoring, and on specific efforts to use deep learning to identify brick kilns, they developed a highly accurate algorithm that not only identifies whether images contain kilns but also learns to localize kilns within the image. The method rebuilds kilns that have been fragmented across multiple images - an inherent problem with satellite imagery - and is able to identify when multiple kilns are contained within a single image. The researchers are also able to distinguish between two kiln technologies - one of which is banned - based on shape classification.
The approach revealed that more than three-fourths of kilns in Bangladesh are illegally constructed within 1 kilometer (0.6 mile) of a school, and almost 10 percent are illegally close to health facilities. It also showed that the government systematically underreports kilns with respect to regulations and - according to the shape classification findings - overreports the percentage of kilns using a newer, cleaner technology relative to an older, banned approach. The researchers also found higher numbers of registered kilns in districts adjacent to the banned districts, suggesting kilns are formally registered in the districts where they are legal but constructed across district borders.
The researchers are working to address the approach's limitations by developing ways to use lower-resolution imagery as well as to expand their work to other regions where bricks are constructed similarly. Getting it right could make a big difference. In Bangladesh alone, almost everyone lives within 10 kilometers (6.2 miles) of a brick kiln, and more than 18 million - more than twice the population of New York City - live within 1 kilometer (0.6 mile), according to the researchers' estimates.
"We are hopeful our general approach can enable more effective regulation and policies to achieve better health and environmental outcomes in the future," said co-lead author Jihyeon Lee, BS '19, a researcher in Stanford's Sustainability and Artificial Intelligence Lab .
Luby is also a senior fellow at the Stanford Woods Institute for the Environment and the Freeman Spogli Institute for International Studies and a member of Stanford Bio-X and the Stanford Maternal & Child Health Research Institute . Co-authors of the study also include Fahim Tajwar, an undergraduate student in computer science; Marshall Burke , an associate professor of Earth system science in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth) and a senior fellow at the Freeman Spogli Institute for International Studies, at the Stanford Woods Institute for the Environment and at the Stanford Institute for Economic Policy Research ; Stefano Ermon , an assistant professor of computer science in Stanford's School of Engineering and a center fellow at the Stanford Woods Institute for the Environment; David Lobell , the Gloria and Richard Kushel Director of the Center on Food Security and the Environment , a professor of Earth system science , the William Wrigley Senior Fellow at the Stanford Woods Institute for the Environment and at the Freeman Spogli Institute for International Studies, and a senior fellow at the Stanford Institute for Economic Policy Research; and Debashish Biswas, an assistant scientist at the International Centre for Diarrheal Disease Research, Bangladesh ( icddr,b ).
The research was funded by the Stanford King Center on Global Development , the Stanford Woods Institute for the Environment and the National Science Foundation.
To read all stories about Stanford science, subscribe to the biweekly Stanford Science Digest .
-30- |
|||
524 | Apple Targeted in $50-Million Ransomware Hack of Supplier Quanta | Taiwan-based Apple contract manufacturer Quanta Computer suffered a ransomware attack apparently by Russian operator REvil, which claimed to have stolen the blueprints of Apple's latest products. A user on the cybercrime forum XSS posted Sunday that REvil was about to declare its "largest attack ever," according to an anonymous source. REvil named Quanta its latest victim on its "Happy Blog" site, claiming it had waited to publicize the breach until Apple's latest product launch because Quanta had refused to pay its ransom demands. By the time the launch ended, REvil had posted schematics for a new laptop, including the workings of what seems to be a Macbook designed as recently as March. | [] | [] | [] | scitechnews | None | None | None | None | Taiwan-based Apple contract manufacturer Quanta Computer suffered a ransomware attack apparently by Russian operator REvil, which claimed to have stolen the blueprints of Apple's latest products. A user on the cybercrime forum XSS posted Sunday that REvil was about to declare its "largest attack ever," according to an anonymous source. REvil named Quanta its latest victim on its "Happy Blog" site, claiming it had waited to publicize the breach until Apple's latest product launch because Quanta had refused to pay its ransom demands. By the time the launch ended, REvil had posted schematics for a new laptop, including the workings of what seems to be a Macbook designed as recently as March.
|
||||
525 | AI Tool Tracks Evolution of Covid-19 Conspiracy Theories on Social Media | LOS ALAMOS, N.M., April 19, 2021 - A new machine-learning program accurately identifies COVID-19-related conspiracy theories on social media and models how they evolved over time - a tool that could someday help public health officials combat misinformation online.
"A lot of machine-learning studies related to misinformation on social media focus on identifying different kinds of conspiracy theories," said Courtney Shelley, a postdoctoral researcher in the Information Systems and Modeling Group at Los Alamos National Laboratory and co-author of the study that was published last week in the Journal of Medical Internet Research.
"Instead, we wanted to create a more cohesive understanding of how misinformation changes as it spreads. Because people tend to believe the first message they encounter, public health officials could someday monitor which conspiracy theories are gaining traction on social media and craft factual public information campaigns to preempt widespread acceptance of falsehoods."
The study, titled "Thought I'd Share First," used publicly available, anonymized Twitter data to characterize four COVID-19 conspiracy theory themes and provide context for each through the first five months of the pandemic.
The four themes the study examined were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered or has otherwise malicious intent related to COVID-19; that the virus was bioengineered or was developed in a laboratory; and that the COVID-19 vaccines, which were then all still in development, would be dangerous.
"We began with a dataset of approximately 1.8 million tweets that contained COVID-19 keywords or were from health-related Twitter accounts," said Dax Gerts, a computer scientist also in Los Alamos' Information Systems and Modeling Group and the study's co-author. "From this body of data, we identified subsets that matched the four conspiracy theories using pattern filtering, and hand labeled several hundred tweets in each conspiracy theory category to construct training sets."
Using the data collected for each of the four theories, the team built random forest machine-learning, or artificial intelligence (AI), models that categorized tweets as COVID-19 misinformation or not.
"This allowed us to observe the way individuals talk about these conspiracy theories on social media, and observe changes over time," said Gerts.
The study showed that misinformation tweets contain more negative sentiment when compared to factual tweets and that conspiracy theories evolve over time, incorporating details from unrelated conspiracy theories as well as real-world events.
For example, Bill Gates participated in a Reddit "Ask Me Anything" in March 2020, which highlighted Gates-funded research to develop injectable invisible ink that could be used to record vaccinations. Immediately after, there was an increase in the prominence of words associated with vaccine-averse conspiracy theories suggesting the COVID-19 vaccine would secretly microchip individuals for population control.
Furthermore, the study found that a supervised learning technique could be used to automatically identify conspiracy theories, and that an unsupervised learning approach (dynamic topic modeling) could be used to explore changes in word importance among topics within each theory.
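As a simplified stand-in for that unsupervised step, the sketch below fits a small topic model to each time window and prints the top words per topic, which is one way to watch word importance shift over time. It is not the authors' dynamic topic model; the tweets and the two time windows are invented.

```python
# Simplified stand-in for the unsupervised step: fit a small topic model
# per time window and compare the top words, one way to watch topic
# vocabulary drift. Not the authors' dynamic topic model; the tweets and
# the two windows are invented.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

windows = {
    "March": ["vaccine trials are a trick", "gates funds vaccine research",
              "lab leak rumors spread online"],
    "April": ["vaccine will microchip you", "gates microchip population control",
              "they say the virus came from a lab"],
}

for window, tweets in windows.items():
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    vocab = vectorizer.get_feature_names_out()
    for topic_idx, weights in enumerate(lda.components_):
        top_words = [vocab[i] for i in weights.argsort()[-3:][::-1]]
        print(f"{window} / topic {topic_idx}: {top_words}")
```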
"It's important for public health officials to know how conspiracy theories are evolving and gaining traction over time," said Shelley. "If not, they run the risk of inadvertently publicizing conspiracy theories that might otherwise 'die on the vine.' So, knowing how conspiracy theories are changing and perhaps incorporating other theories or real-world events is important when strategizing how to counter them with factual public information campaigns."
Paper: Gerts D, Shelley C, Parikh N, Pitts T, Watson Ross C, Fairchild G, Vaquera Chavez N, Daughton A. "Thought I'd Share First" and Other Conspiracy Theory Tweets from the COVID-19 Infodemic: Exploratory Study . JMIR Public Health Surveill 2021;7 (4):e26527 DOI: 10.2196/26527
Funding: Los Alamos Laboratory Directed Research and Development fund and National Laboratory Fees Research
Interview with Ashlynn Daughton, an information scientist in the Information Systems and Modeling Group at Los Alamos National Laboratory, and co-author of the study.
LA-UR-21-23512 | Scientists at the U.S. Department of Energy's Los Alamos National Laboratory (LANL) have developed a machine learning (ML) program that accurately identifies Covid-19-associated conspiracy theories on social media, and models their evolution. The team used publicly available, anonymized Twitter data to describe four Covid-19 conspiracy theory themes, and to contextualize each across the first five months of the pandemic. The team constructed random-forest artificial intelligence models that identified tweets as Covid-19 misinformation or not. The scientists learned that a supervised learning technique could automatically identify conspiracy theories, while an unsupervised dynamic topic modeling method could investigate changes in word importance among topics within each theory. | [] | [] | [] | scitechnews | None | None | None | None | Scientists at the U.S. Department of Energy's Los Alamos National Laboratory (LANL) have developed a machine learning (ML) program that accurately identifies Covid-19-associated conspiracy theories on social media, and models their evolution. The team used publicly available, anonymized Twitter data to describe four Covid-19 conspiracy theory themes, and to contextualize each across the first five months of the pandemic. The team constructed random-forest artificial intelligence models that identified tweets as Covid-19 misinformation or not. The scientists learned that a supervised learning technique could automatically identify conspiracy theories, while an unsupervised dynamic topic modeling method could investigate changes in word importance among topics within each theory.
LOS ALAMOS, N.M., April 19, 2021 - A new machine-learning program accurately identifies COVID-19-related conspiracy theories on social media and models how they evolved over time - a tool that could someday help public health officials combat misinformation online.
"A lot of machine-learning studies related to misinformation on social media focus on identifying different kinds of conspiracy theories," said Courtney Shelley, a postdoctoral researcher in the Information Systems and Modeling Group at Los Alamos National Laboratory and co-author of the study that was published last week in the Journal of Medical Internet Research.
"Instead, we wanted to create a more cohesive understanding of how misinformation changes as it spreads. Because people tend to believe the first message they encounter, public health officials could someday monitor which conspiracy theories are gaining traction on social media and craft factual public information campaigns to preempt widespread acceptance of falsehoods."
The study, titled "Thought I'd Share First," used publicly available, anonymized Twitter data to characterize four COVID-19 conspiracy theory themes and provide context for each through the first five months of the pandemic.
The four themes the study examined were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered or has otherwise malicious intent related to COVID-19; that the virus was bioengineered or was developed in a laboratory; and that the COVID-19 vaccines, which were then all still in development, would be dangerous.
"We began with a dataset of approximately 1.8 million tweets that contained COVID-19 keywords or were from health-related Twitter accounts," said Dax Gerts, a computer scientist also in Los Alamos' Information Systems and Modeling Group and the study's co-author. "From this body of data, we identified subsets that matched the four conspiracy theories using pattern filtering, and hand labeled several hundred tweets in each conspiracy theory category to construct training sets."
Using the data collected for each of the four theories, the team built random forest machine-learning, or artificial intelligence (AI), models that categorized tweets as COVID-19 misinformation or not.
"This allowed us to observe the way individuals talk about these conspiracy theories on social media, and observe changes over time," said Gerts.
The study showed that misinformation tweets contain more negative sentiment when compared to factual tweets and that conspiracy theories evolve over time, incorporating details from unrelated conspiracy theories as well as real-world events.
For example, Bill Gates participated in a Reddit "Ask Me Anything" in March 2020, which highlighted Gates-funded research to develop injectable invisible ink that could be used to record vaccinations. Immediately after, there was an increase in the prominence of words associated with vaccine-averse conspiracy theories suggesting the COVID-19 vaccine would secretly microchip individuals for population control.
Furthermore, the study found that a supervised learning technique could be used to automatically identify conspiracy theories, and that an unsupervised learning approach (dynamic topic modeling) could be used to explore changes in word importance among topics within each theory.
"It's important for public health officials to know how conspiracy theories are evolving and gaining traction over time," said Shelley. "If not, they run the risk of inadvertently publicizing conspiracy theories that might otherwise 'die on the vine.' So, knowing how conspiracy theories are changing and perhaps incorporating other theories or real-world events is important when strategizing how to counter them with factual public information campaigns."
Paper: Gerts D, Shelley C, Parikh N, Pitts T, Watson Ross C, Fairchild G, Vaquera Chavez N, Daughton A. "Thought I'd Share First" and Other Conspiracy Theory Tweets from the COVID-19 Infodemic: Exploratory Study . JMIR Public Health Surveill 2021;7 (4):e26527 DOI: 10.2196/26527
Funding: Los Alamos Laboratory Directed Research and Development fund and National Laboratory Fees Research
Interview with Ashlynn Daughton, an information scientist in the Information Systems and Modeling Group at Los Alamos National Laboratory, and co-author of the study.
LA-UR-21-23512 |
|||
526 | Smartphone-Powered Emergency Alert System | Researchers at UAB have developed an emergency alert system that uses inexpensive Bluetooth beacons to alert users of hazards during natural disasters. A team of computer science researchers at the University of Alabama at Birmingham have created and tested a new, Bluetooth-based system for disseminating emergency messages in an urban environment.
Led by Ragib Hasan, Ph.D., associate professor and researcher in the College of Arts and Sciences ' Department of Computer Science , the team wanted to fix inefficiencies in how emergency or hazard messages are disseminated. The messages are usually sent to the public through broadcast media or physical signs.
According to the study, 96 percent of adults in the United States own smartphones; but as of Aug. 8, 2016, only 387 wireless emergency alerts were sent by state or local governments - compared to 2 million alerts sent by the National Weather Service.
"During natural disasters, many of our communication infrastructures break down due to power or phone network outages," Hasan said. "Disseminating emergency management information to people and informing them of dangers or evacuation routes is difficult. Our system, InSight, can work during disasters - even in the absence of power, GPS and phone networks - to disseminate alerts and save lives."
InSight was designed to be a beacon easily deployable by first responders at minimal cost that will accurately detect users approaching a hazard site. The product is composed of a mobile app, the beacons, and a backend server to compute and disseminate the signals that can be received without a connection to the internet. The beacons are inexpensive and easy to deploy during disasters - a first responder can simply throw them into specific locations from a car to quickly mark hazards or evacuation routes.
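A minimal sketch of the proximity-alert logic such an app might run is shown below: it converts a beacon's received signal strength (RSSI) into a rough distance estimate and raises an alert inside a chosen radius. This is an illustration rather than the InSight implementation; the path-loss constants, the roughly 200-foot alert radius, and the simulated scan results are assumptions, and a real app would obtain RSSI readings from the phone's Bluetooth API.

```python
# Sketch of the proximity-alert logic an app like this might run: convert
# a beacon's received signal strength (RSSI) into a rough distance and warn
# inside an alert radius. Illustration only, not the InSight code; the
# path-loss constants and the ~200-foot radius are assumptions, and real
# scanning would use the phone's Bluetooth API.

ALERT_RADIUS_M = 61.0     # roughly 200 feet (an assumed alert radius)
TX_POWER_DBM = -59.0      # assumed RSSI measured at 1 m from the beacon
PATH_LOSS_EXPONENT = 2.0  # assumed free-space propagation

def estimate_distance_m(rssi_dbm: float) -> float:
    """Log-distance path-loss model: estimated distance grows as RSSI drops."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def check_beacon(beacon_id: str, rssi_dbm: float, hazard: str) -> None:
    """Print an alert if the estimated distance is inside the alert radius."""
    distance = estimate_distance_m(rssi_dbm)
    if distance <= ALERT_RADIUS_M:
        print(f"ALERT ({beacon_id}): {hazard}, about {distance:.0f} m ahead")

# Simulated scan results: (beacon id, RSSI in dBm, hazard message on the beacon).
for beacon_id, rssi, hazard in [
    ("insight-01", -70.0, "Road closed: flooding"),
    ("insight-02", -95.0, "Evacuation route to the east"),
]:
    check_beacon(beacon_id, rssi, hazard)
```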
The beacons were tested in three different emergency situations: a construction site, a traffic intersection and an evacuation route. They were deployed on foot and thrown from a car, both methods being completed in under seven minutes. Results indicated that, when users were on foot or a bike, they received alerts while less than 200 feet away from the beacon.
"Our experiments show the feasibility of deploying the beacons in real-life emergency scenarios," Hasan said. "The long range of the signal ensures that users can get notified of the hazard from a safe distance. It also works during storm, rain or darkness, when visibility is limited. The current warning methods require either a visible sign or a working phone or communication network - neither of which may be useful in such disaster scenarios."
InSight was presented by Hasan and his team at the Institute of Electrical and Electronics Engineers' 2021 Consumer Communications and Networking Conference, the world's largest tradeshow for consumer technology. The project was funded in part by a grant from the National Science Foundation.
Working with Hasan were Tanveer Islam, Ph.D., the chair of the Department of Emergency Management at Jacksonville State University, and Raiful Hasan, a UAB Ph.D. student working under Hasan's supervision at the UAB SECuRE and Trustworthy Computing, or SECRET, Lab.
"I believe that InSight is both timely and vital in saving human lives during disasters - recent snowstorms and tornado events show how our traditional infrastructure can easily break down, costing precious lives," Hasan said. "Our collaboration with JSU enables the use of computer science technology to solve emergency management problems that have a high impact on society and human lives." | Computer science researchers at the University of Alabama at Birmingham (UAB) and Jacksonville State University have developed and tested a Bluetooth-based smartphone system for disseminating emergency alerts in an urban environment. InSight is a beacon that first responders could easily deploy at low cost that detects users approaching a hazardous site. The system features a mobile application, beacons, and a backend server to compute and distribute signals that can be received without an Internet connection. UAB's Ragib Hasan said, "Our system, InSight, can work during disasters - even in the absence of power, [global positioning systems], and phone networks - to disseminate alerts and save lives." | [] | [] | [] | scitechnews | None | None | None | None | Computer science researchers at the University of Alabama at Birmingham (UAB) and Jacksonville State University have developed and tested a Bluetooth-based smartphone system for disseminating emergency alerts in an urban environment. InSight is a beacon that first responders could easily deploy at low cost that detects users approaching a hazardous site. The system features a mobile application, beacons, and a backend server to compute and distribute signals that can be received without an Internet connection. UAB's Ragib Hasan said, "Our system, InSight, can work during disasters - even in the absence of power, [global positioning systems], and phone networks - to disseminate alerts and save lives."
Researchers at UAB have developed an emergency alert system that uses inexpensive Bluetooth beacons to alert users of hazards during natural disasters. A team of computer science researchers at the University of Alabama at Birmingham have created and tested a new, Bluetooth-based system for disseminating emergency messages in an urban environment.
Led by Ragib Hasan, Ph.D., associate professor and researcher in the College of Arts and Sciences ' Department of Computer Science , the team wanted to fix inefficiencies in how emergency or hazard messages are disseminated. The messages are usually sent to the public through broadcast media or physical signs.
According to the study, 96 percent of adults in the United States own smartphones; but as of Aug. 8, 2016, only 387 wireless emergency alerts were sent by state or local governments - compared to 2 million alerts sent by the National Weather Service.
"During natural disasters, many of our communication infrastructures break down due to power or phone network outages," Hasan said. "Disseminating emergency management information to people and informing them of dangers or evacuation routes is difficult. Our system, InSight, can work during disasters - even in the absence of power, GPS and phone networks - to disseminate alerts and save lives."
InSight was designed to be a beacon easily deployable by first responders at minimal cost that will accurately detect users approaching a hazard site. The product is composed of a mobile app, the beacons, and a backend server to compute and disseminate the signals, which can be received without a connection to the internet. The beacons are inexpensive and easy to deploy during disasters - a first responder can simply throw them into specific locations from a car to quickly mark hazards or evacuation routes.
The beacons were tested in three different emergency situations: a construction site, a traffic intersection and an evacuation route. They were deployed on foot and thrown from a car, both methods being completed in under seven minutes. Results indicated that, when users were on foot or a bike, they received alerts while less than 200 feet away from the beacon.
"Our experiments show the feasibility of deploying the beacons in real-life emergency scenarios," Hasan said. "The long range of the signal ensures that users can get notified of the hazard from a safe distance. It also works during storm, rain or darkness, when visibility is limited. The current warning methods require either a visible sign or a working phone or communication network - neither of which may be useful in such disaster scenarios."
InSight was presented by Hasan and his team at the Institute of Electrical and Electronics Engineers ' 2021 Consumer Communications and Networking Conference , the world's largest tradeshow for consumer technology. The project was funded in part by a grant from the National Science Foundation.
Working with Hasan were Tanveer Islam, Ph.D., the chair of the Department of Emergency Management at Jacksonville State University, and Raiful Hasan, a UAB Ph.D. student working under Hasan's supervision at the UAB SECuRE and Trustworthy Computing, or SECRET, Lab.
"I believe that InSight is both timely and vital in saving human lives during disasters - recent snowstorms and tornado events show how our traditional infrastructure can easily break down, costing precious lives," Hasan said. "Our collaboration with JSU enables the use of computer science technology to solve emergency management problems that have a high impact on society and human lives." |
|||
528 | Amazon Bringing Palm-Scanning Payment System to Whole Foods Stores | Amazon's palm-scanning payment system will be rolled out to a Whole Foods store in Seattle's Capitol Hill neighborhood before expanding to seven other Whole Foods stores in the area in the coming months. About a dozen Amazon physical stores already offer the Amazon One payment system, which allows shoppers who have linked a credit card to their palm print to pay for items by holding their palm over a scanning device. Amazon says the palm-scanning system is "highly secure" and more private than facial recognition and other biometric systems. The company says thousands of people have signed up to use the system at the Amazon stores. | [] | [] | [] | scitechnews | None | None | None | None | Amazon's palm-scanning payment system will be rolled out to a Whole Foods store in Seattle's Capitol Hill neighborhood before expanding to seven other Whole Foods stores in the area in the coming months. About a dozen Amazon physical stores already offer the Amazon One payment system, which allows shoppers who have linked a credit card to their palm print to pay for items by holding their palm over a scanning device. Amazon says the palm-scanning system is "highly secure" and more private than facial recognition and other biometric systems. The company says thousands of people have signed up to use the system at the Amazon stores.
|
||||
529 | New Rules Allowing Small Drones to Fly Over People in U.S. Take Effect | Final rules from the U.S. Federal Aviation Administration that permit small drones to fly over people and at night took effect April 21. The rules also allow drones to fly over moving vehicles in some instances. To address security concerns, remote identification technology (Remote ID) will be required in most cases so drones can be identified from the ground. Drone manufacturers have been given 18 months to begin production of drones with Remote ID, and an additional year has been granted to operators to provide Remote ID. The rules do not require drones to be connected to the Internet to transmit location data, but they must use radio frequency (RF) broadcasting to transmit remote ID messages. U.S. Transportation Secretary Pete Buttigieg called the rules "an important first step in safely and securely managing the growing use of drones in our airspace." | [] | [] | [] | scitechnews | None | None | None | None | Final rules from the U.S. Federal Aviation Administration that permit small drones to fly over people and at night took effect April 21. The rules also allow drones to fly over moving vehicles in some instances. To address security concerns, remote identification technology (Remote ID) will be required in most cases so drones can be identified from the ground. Drone manufacturers have been given 18 months to begin production of drones with Remote ID, and an additional year has been granted to operators to provide Remote ID. The rules do not require drones to be connected to the Internet to transmit location data, but they must use radio frequency (RF) broadcasting to transmit remote ID messages. U.S. Transportation Secretary Pete Buttigieg called the rules "an important first step in safely and securely managing the growing use of drones in our airspace."
|
||||
530 | Researchers Develop Chip That Improves Testing, Tracing for Covid-19 | Jeremy Edwards, director of the Computational Genomics and Technology (CGaT) Laboratory at The University of New Mexico, and his colleagues at Centrillion Technologies in Palo Alto, Calif. and West Virginia University, have developed a chip that provides a simpler and more rapid method of genome sequencing for viruses like COVID-19.
Their research, titled "Highly Accurate Chip-Based Resequencing of SARS-CoV-2 Clinical Samples," was published recently in the American Chemical Society's Langmuir. As part of the research, the scientists created a tiled genome array for rapid and inexpensive full viral genome resequencing and applied their SARS-CoV-2-specific genome tiling array to rapidly and accurately resequence the viral genome from eight clinical samples acquired from patients in Wyoming who tested positive for SARS-CoV-2. Ultimately, they were able to sequence 95 percent of the genome of each sample with greater than 99.9 percent accuracy.
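As a hedged illustration of the two statistics quoted above, the short sketch below computes coverage (the fraction of reference positions called) and per-base accuracy for a resequenced consensus in which uncalled positions are written as 'N'. The toy sequences and the function name are hypothetical and are not part of the published pipeline.

```python
# Hypothetical helper: coverage and per-base accuracy of a resequenced
# consensus against a reference of the same aligned length.

def coverage_and_accuracy(consensus: str, reference: str):
    assert len(consensus) == len(reference), "sequences must be aligned to equal length"
    called = [(c, r) for c, r in zip(consensus.upper(), reference.upper()) if c != "N"]
    coverage = len(called) / len(reference)
    accuracy = sum(c == r for c, r in called) / len(called) if called else 0.0
    return coverage, accuracy

if __name__ == "__main__":
    reference = "ACGTACGTACGTACGTACGT"
    consensus = "ACGTACGTNNGTACGTACGA"   # two uncalled bases, one mismatch
    cov, acc = coverage_and_accuracy(consensus, reference)
    print(f"coverage = {cov:.1%}, accuracy = {acc:.1%}")
```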
" This new technology allows for faster and more accurate tracing of COVID and other respiratory viruses, including the appearance of new variants," said Edwards, who is a professor in the UNM Department of Chemistry and Chemical Biology. "With this simple and rapid testing procedure, scientists will be able to more accurately track the progression and better prevent the onset of the next pandemic."
With more than 142 million people worldwide having contracted the virus, vigilant testing and contact tracing are the most effective ways to slow the spread of COVID-19. Traditional methods of clinical testing often produce false positives or negatives, and traditional methods of sequencing are time-consuming and expensive. This new technology will virtually eliminate all of these barriers.
"Since the submission of the paper, the technology has further evolved with improved accuracy and sensitivity," said Edwards. "The chip technology is the best available technology for large-scale viral genome surveillance and monitoring viral variants. This technology could not only help control this pandemic and also prevent future pandemics."
The mission of the Computational Genomics and Technology (CGaT) Laboratory is to provide training in bioinformatics research for undergraduate, master's and Ph.D. students, as well as postdoctoral fellows; provide collaborative research interactions to utilize bioinformatics computing tools for researchers at UNM, and to conduct state-of-the-art and innovative bioinformatics and genomics research within the center. | Researchers at the University of New Mexico (UNM), West Virginia University, and Centrillion Technologies in California have engineered a chip that offers a simpler, faster genome-sequencing process for viruses like Covid-19. The scientists created and applied a tiled genome array to rapidly and accurately resequence the viral genome from eight clinical samples obtained from SARS-CoV-2-positive patients in Wyoming. The team sequenced 95% of the genome of each sample with more than 99.9% accuracy. UNM's Jeremy Edwards said, "With this simple and rapid testing procedure, scientists will be able to more accurately track the progression and better prevent the onset of the next pandemic." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the University of New Mexico (UNM), West Virginia University, and Centrillion Technologies in California have engineered a chip that offers a simpler, faster genome-sequencing process for viruses like Covid-19. The scientists created and applied a tiled genome array to rapidly and accurately resequence the viral genome from eight clinical samples obtained from SARS-CoV-2-positive patients in Wyoming. The team sequenced 95% of the genome of each sample with more than 99.9% accuracy. UNM's Jeremy Edwards said, "With this simple and rapid testing procedure, scientists will be able to more accurately track the progression and better prevent the onset of the next pandemic."
Jeremy Edwards, director of the Computational Genomics and Technology (CGaT) Laboratory at The University of New Mexico, and his colleagues at Centrillion Technologies in Palo Alto, Calif. and West Virginia University, have developed a chip that provides a simpler and more rapid method of genome sequencing for viruses like COVID-19.
Their research, titled "Highly Accurate Chip-Based Resequencing of SARS-CoV-2 Clinical Samples," was published recently in the American Chemical Society's Langmuir. As part of the research, the scientists created a tiled genome array for rapid and inexpensive full viral genome resequencing and applied their SARS-CoV-2-specific genome tiling array to rapidly and accurately resequence the viral genome from eight clinical samples acquired from patients in Wyoming who tested positive for SARS-CoV-2. Ultimately, they were able to sequence 95 percent of the genome of each sample with greater than 99.9 percent accuracy.
" This new technology allows for faster and more accurate tracing of COVID and other respiratory viruses, including the appearance of new variants," said Edwards, who is a professor in the UNM Department of Chemistry and Chemical Biology. "With this simple and rapid testing procedure, scientists will be able to more accurately track the progression and better prevent the onset of the next pandemic."
With more than 142 million people worldwide having contracted the virus, vigilant testing and contact tracing are the most effective ways to slow the spread of COVID-19. Traditional methods of clinical testing often produce false positives or negatives, and traditional methods of sequencing are time-consuming and expensive. This new technology will virtually eliminate all of these barriers.
"Since the submission of the paper, the technology has further evolved with improved accuracy and sensitivity," said Edwards. "The chip technology is the best available technology for large-scale viral genome surveillance and monitoring viral variants. This technology could not only help control this pandemic and also prevent future pandemics."
The mission of the Computational Genomics and Technology (CGaT) Laboratory is to provide training in bioinformatics research for undergraduate, master's and Ph.D. students, as well as postdoctoral fellows; provide collaborative research interactions to utilize bioinformatics computing tools for researchers at UNM, and to conduct state-of-the-art and innovative bioinformatics and genomics research within the center. |
|||
531 | Study Explores Deep Neural Networks' Visual Perception | A new study from the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) explores how well deep neural networks compare to the human brain when it comes to visual perception.
According to an IISc release, deep neural networks are machine learning systems inspired by the network of brain cells or neurons in the human brain, which can be trained to perform specific tasks and have played a pivotal role in helping scientists understand how our brains perceive the things that we see. Despite having evolved significantly over the past decade, they are still nowhere close to performing as well as the human brain in perceiving visual cues, it said.
Deep networks work differently from the human brain. "While complex computation is trivial for them, certain tasks that are relatively easy for humans can be difficult for these networks to complete," it said.
In the recent study, published in Nature Communications, S.P. Arun, Associate Professor at CNS, and his team have compared various qualitative properties of these deep networks with those of the human brain. The team studied 13 different perceptual effects and uncovered previously unknown qualitative differences between deep networks and the human brain.
"An example is the Thatcher effect, a phenomenon where humans find it easier to recognise local feature changes in an upright image, but this becomes difficult when the image is flipped upside-down. Deep networks trained to recognise upright faces showed a Thatcher effect when compared with networks trained to recognise objects. Another visual property of the human brain, called mirror confusion, was tested on these networks. To humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis," explained the release.
The researchers, it said, found that deep networks also show stronger mirror confusion for vertically reflected images than for horizontally reflected ones.
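A minimal, hypothetical sketch of how such a mirror-confusion probe could be run on an off-the-shelf ImageNet network is shown below. The choice of ResNet-18, the placeholder image path, and the use of cosine similarity over final-layer features are assumptions for illustration, not the study's exact protocol.

```python
import torch
import torch.nn.functional as F
from PIL import Image, ImageOps
from torchvision import models, transforms

# Hedged sketch: compare a pretrained network's feature-space similarity between
# an image and its vertical-axis vs. horizontal-axis mirror reflections.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()          # drop the classifier; keep the 512-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def features(img):
    with torch.no_grad():
        return model(preprocess(img).unsqueeze(0)).squeeze(0)

img = Image.open("object.jpg").convert("RGB")   # placeholder image path
f_orig = features(img)
f_vert = features(ImageOps.mirror(img))         # reflection about the vertical axis
f_horz = features(ImageOps.flip(img))           # reflection about the horizontal axis

print("similarity, vertical-axis mirror:  ",
      round(F.cosine_similarity(f_orig, f_vert, dim=0).item(), 3))
print("similarity, horizontal-axis mirror:",
      round(F.cosine_similarity(f_orig, f_horz, dim=0).item(), 3))
```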
Another phenomenon peculiar to the human brain is that it focuses on coarser details first. This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of the leaves in it.
Georgin Jacob, first author and PhD student at CNS, said that, surprisingly, the neural networks showed a local advantage. This means that unlike the brain, the networks focus on the finer details of an image first.
"Lots of studies have been showing similarities between deep networks and brains, but no one has really looked at systematic differences," Mr. Arun was quoted as saying.
The IISc release said identifying these differences can push us closer to making these networks more brain-like and help researchers build more robust neural networks that not only perform better, but are also immune to "adversarial attacks" that aim to derail them. | A study by researchers at the Indian Institute of Science (IISc) compared the performance of deep neural networks to that of the human brain in terms of visual perception. The team analyzed 13 different perceptual effects, revealing previously unknown qualitative distinctions between deep networks and the brain. IISc's Georgin Jacob observed a surprising local advantage among neural networks over the brain, in that they concentrate on the finer details of an image first. An IISc press release said identifying these differences can lead to better-performing neural networks that are resistant to adversarial attacks. | [] | [] | [] | scitechnews | None | None | None | None | A study by researchers at the Indian Institute of Science (IISc) compared the performance of deep neural networks to that of the human brain in terms of visual perception. The team analyzed 13 different perceptual effects, revealing previously unknown qualitative distinctions between deep networks and the brain. IISc's Georgin Jacob observed a surprising local advantage among neural networks over the brain, in that they concentrate on the finer details of an image first. An IISc press release said identifying these differences can lead to better-performing neural networks that are resistant to adversarial attacks.
A new study from the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) explores how well deep neural networks compare to the human brain when it comes to visual perception.
According to an IISc release, deep neural networks are machine learning systems inspired by the network of brain cells or neurons in the human brain, which can be trained to perform specific tasks and have played a pivotal role in helping scientists understand how our brains perceive the things that we see. Despite having evolved significantly over the past decade, they are still nowhere close to performing as well as the human brain in perceiving visual cues, it said.
Deep networks work differently from the human brain. "While complex computation is trivial for them, certain tasks that are relatively easy for humans can be difficult for these networks to complete," it said.
In the recent study, published in Nature Communications, S.P. Arun, Associate Professor at CNS, and his team have compared various qualitative properties of these deep networks with those of the human brain. The team studied 13 different perceptual effects and uncovered previously unknown qualitative differences between deep networks and the human brain.
"An example is the Thatcher effect, a phenomenon where humans find it easier to recognise local feature changes in an upright image, but this becomes difficult when the image is flipped upside-down. Deep networks trained to recognise upright faces showed a Thatcher effect when compared with networks trained to recognise objects. Another visual property of the human brain, called mirror confusion, was tested on these networks. To humans, mirror reflections along the vertical axis appear more similar than those along the horizontal axis," explained the release.
The researchers, it said, found that deep networks also show stronger mirror confusion for vertical compared to horizontally reflected images.
Another phenomenon peculiar to the human brain is that it focuses on coarser details first. This is known as the global advantage effect. For example, in an image of a tree, our brain would first see the tree as a whole before noticing the details of the leaves in it.
Georgin Jacob, first author and PhD student at CNS, said surprisingly, neural networks showed a local advantage. This means that unlike the brain, the networks focus on the finer details of an image first.
"Lots of studies have been showing similarities between deep networks and brains, but no one has really looked at systematic differences," Mr. Arun was quoted as saying.
The IISc release said identifying these differences can push us closer to making these networks more brain-like and help researchers build more robust neural networks that not only perform better, but are also immune to "adversarial attacks" that aim to derail them. |
|||
533 | Designing Healthy Diets - With Computer Analysis | "Intesti nal bacteria have an imp ortant role to play in health and the development of diseases, and our new mathematical model could be extremely helpful in these areas," says Jens Nielsen , Professor of Systems Biology at Chalmers, who led the research.
The new paper describes how the mathematical model performed when making predictions relating to two earlier clinical studies, one involv ing Swedish infants, and the other adults in Finland with obesity.
The studies involved regular measurements of health indicators, which the researchers compared with the predictions made from their mathematical model - the m odel proved to be highly accurate in predicting multiple variables, including how a switch from liquid to solid food in the Swedish infants affected their intestinal bacterial composition.
T hey also measured how the obese adults' intestinal bacteria changed after a move to a more restricted diet. Again, the model's predictions proved to be reliably accurate.
"These are very encouraging results, which could enable computer-based design for a very complex system. Our model could therefore be used to for creating personalised healthy diets, with the possibility to predict how adding specific bacteria as novel probiotics could impact a patient's health," says Jens Nielsen.
There are many different things that affect how different bacteria grow and function in the intestinal system. For example, which other bacteria are already present and how they interact with each other, as well as how they interact with the host - that is to say, us. The bacteria are also further affected by their environmental factors, such as the diet we eat.
All of these variables make it difficult to predict the effect that adding bacteria or making dietary changes will have. One must first understand how these bacteria are likely to act when they enter the intestine or how a change in diet will affect the intestinal composition. Will they be able to grow there or not? How will they interact with and possibly affect the bacteria that are already present in the gut? How do different diets affect the intestinal microbiome?
"The model we have developed is unique because it accounts for all these variables. It combines data on the individual bacteria as well as how they interact. It also includes data on how food travels through the gastrointestinal tract and affects the bacteria along the way in its calculations. The latter can be measured by examining blood samples and looking at metabolites, the end products that are formed when bacteria break down different types of food," says Jens Nielsen.
The data to build the model has been gathered from many years' worth of pre-existing clinical studies. As more data is obtained in the future, the model can be updated with new features, such as descriptions of hormonal responses to dietary intake.
Research on diet and the human microbiome, or intestinal bacterial composition, is a field of research that generates great interest, among both researchers and the general public. Jens Nielsen explains why:
"Changes in the bacterial composition can be associated with or signify a great number of ailments, such as obesity, diabetes, or cardiovascular diseases. It can also affect how the body responds to certain types of cancer treatments or specially developed diets."
Working with the bacterial composition therefore offers the potential to influence the course of diseases and overall health. This can be done through treatment with probiotics - carefully selected bacteria that are believed to contribute to improved health.
In future work, Jens Nielsen and his research group will use the model directly in clinical studies. They are already participating in a study together with Sahlgrenska University Hospital in Sweden, where older women are being treated for osteoporosis with the bacteria Lactobacillus reuteri . It has been seen that some patients respond better to treatment than others, and the new model could be used as part of the analysis to understand why this is so.
Cancer treatment with antibodies is another area where the model could be used to analyse the microbiome, helping to understand its role in why some patients respond well to immunotherapy, and some less so.
"This would be an incredible asset if our model can begin to identify bacteria that could improve the treatment of cancer patients. We believe it could really make a big difference here," says Jens Nielsen.
Read the whole study in PNAS: CODY enables quantitatively spatiotemporal predictions on in vivo gut microbial variability induced by diet intervention
More about the study | Researchers at Sweden's Chalmers University of Technology developed a mathematical model that predicts the interaction of intestinal bacteria within the human body. The researchers found the model to be accurate in making predictions related to an earlier clinical study involving Swedish infants and how changing their diets from liquid to solid food affected the composition of their intestinal bacteria, as well as an earlier clinical study of obese adults in Finland and how their intestinal bacteria changed after shifting to a more restricted diet. Chalmers' Jens Nielsen said the model accounts for how added bacteria behave in the intestine and interact with intestinal bacteria, and how the intestinal microbiome is affected by different diets. Nielsen said, "These are very encouraging results, which could enable computer-based design for a very complex system." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Sweden's Chalmers University of Technology developed a mathematical model that predicts the interaction of intestinal bacteria within the human body. The researchers found the model to be accurate in making predictions related to an earlier clinical study involving Swedish infants and how changing their diets from liquid to solid food affected the composition of their intestinal bacteria, as well as an earlier clinical study of obese adults in Finland and how their intestinal bacteria changed after shifting to a more restricted diet. Chalmers' Jens Nielsen said the model accounts for how added bacteria behave in the intestine and interact with intestinal bacteria, and how the intestinal microbiome is affected by different diets. Nielsen said, "These are very encouraging results, which could enable computer-based design for a very complex system."
"Intesti nal bacteria have an imp ortant role to play in health and the development of diseases, and our new mathematical model could be extremely helpful in these areas," says Jens Nielsen , Professor of Systems Biology at Chalmers, who led the research.
The new paper describes how the mathematical model performed when making predictions relating to two earlier clinical studies, one involv ing Swedish infants, and the other adults in Finland with obesity.
The studies involved regular measurements of health indicators, which the researchers compared with the predictions made from their mathematical model - the m odel proved to be highly accurate in predicting multiple variables, including how a switch from liquid to solid food in the Swedish infants affected their intestinal bacterial composition.
T hey also measured how the obese adults' intestinal bacteria changed after a move to a more restricted diet. Again, the model's predictions proved to be reliably accurate.
"These are very encouraging results, which could enable computer-based design for a very complex system. Our model could therefore be used to for creating personalised healthy diets, with the possibility to predict how adding specific bacteria as novel probiotics could impact a patient's health," says Jens Nielsen.
There are many different things that affect how different bacteria grow and function in the intestinal system. For example, which other bacteria are already present and how they interact with each other, as well as how they interact with the host - that is to say, us. The bacteria are also further affected by their environmental factors, such as the diet we eat.
All of these variables make it difficult to predict the effect that adding bacteria or making dietary changes will have. One must first understand how these bacteria are likely to act when they enter the intestine or how a change in diet will affect the intestinal composition. Will they be able to grow there or not? How will they interact with and possibly affect the bacteria that are already present in the gut? How do different diets affect the intestinal microbiome?
"The model we have developed is unique because it accounts for all these variables. It combines data on the individual bacteria as well as how they interact. It also includes data on how food travels through the gastrointestinal tract and affects the bacteria along the way in its calculations. The latter can be measured by examining blood samples and looking at metabolites, the end products that are formed when bacteria break down different types of food," says Jens Nielsen.
The data to build the model has been gathered from many years' worth of pre-existing clinical studies. As more data is obtained in the future, the model can be updated with new features, such as descriptions of hormonal responses to dietary intake.
Research on diet and the human microbiome, or intestinal bacterial composition, is a field of research that generates great interest, among both researchers and the general public. Jens Nielsen explains why:
"Changes in the bacterial composition can be associated with or signify a great number of ailments, such as obesity, diabetes, or cardiovascular diseases. It can also affect how the body responds to certain types of cancer treatments or specially developed diets."
Working with the bacterial composition therefore offers the potential to influence the course of diseases and overall health. This can be done through treatment with probiotics - carefully selected bacteria that are believed to contribute to improved health.
In future work, Jens Nielsen and his research group will use the model directly in clinical studies. They are already participating in a study together with Sahlgrenska University Hospital in Sweden, where older women are being treated for osteoporosis with the bacteria Lactobacillus reuteri . It has been seen that some patients respond better to treatment than others, and the new model could be used as part of the analysis to understand why this is so.
Cancer treatment with antibodies is another area where the model could be used to analyse the microbiome, helping to understand its role in why some patients respond well to immunotherapy, and some less so.
"This would be an incredible asset if our model can begin to identify bacteria that could improve the treatment of cancer patients. We believe it could really make a big difference here," says Jens Nielsen.
Read the whole study in PNAS: CODY enables quantitatively spatiotemporal predictions on in vivo gut microbial variability induced by diet intervention
|||
536 | Researchers Uncover Advertising Scam Targeting Streaming-TV Apps | Nearly 1 million mobile devices were infected with malware that emulated streaming-TV applications and collected revenue from unwitting advertisers, according to researchers at cybersecurity firm Human Security. The researchers said the orchestrators of this so-called "Pareto" scheme spoofed an average of 650 million ad placement opportunities daily in online ad exchanges, stealing money intended for apps available on streaming-TV platforms run by Roku, Amazon.com, Apple, and Google. The creator of 29 apps underpinning the fraud was identified as TopTop Media, a subsidiary of Israel-based M51 Group. The analysts said the operation could be thwarted if digital ad companies strictly followed industry guidance for tracking the origins of traffic and deployed certain security measures. Human Security's Michael McNally said, "Measurement and security companies will just play whack-a-mole, as long as the industry hasn't upgraded to better defenses." | [] | [] | [] | scitechnews | None | None | None | None | Nearly 1 million mobile devices were infected with malware that emulated streaming-TV applications and collected revenue from unwitting advertisers, according to researchers at cybersecurity firm Human Security. The researchers said the orchestrators of this so-called "Pareto" scheme spoofed an average of 650 million ad placement opportunities daily in online ad exchanges, stealing money intended for apps available on streaming-TV platforms run by Roku, Amazon.com, Apple, and Google. The creator of 29 apps underpinning the fraud was identified as TopTop Media, a subsidiary of Israel-based M51 Group. The analysts said the operation could be thwarted if digital ad companies strictly followed industry guidance for tracking the origins of traffic and deployed certain security measures. Human Security's Michael McNally said, "Measurement and security companies will just play whack-a-mole, as long as the industry hasn't upgraded to better defenses."
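One widely used piece of such industry guidance is the IAB's app-ads.txt mechanism, which lets ad buyers check whether the seller named in a bid request is actually authorized by the app's developer. The sketch below is a hypothetical illustration of that check, with made-up file contents and account IDs; it is not drawn from the researchers' report.

```python
# Hypothetical app-ads.txt check: is a claimed (ad system, seller account) pair
# authorized by the app developer's published file?

APP_ADS_TXT = """
# app-ads.txt served from the app developer's website (made-up entries)
exampleexchange.com, pub-1234567890, DIRECT, f08c47fec0942fa0
resellerads.net, 98765, RESELLER
"""

def parse_app_ads(text: str):
    """Return the set of (ad_system, seller_account_id) pairs listed as authorized."""
    entries = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments and blank lines
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            entries.add((fields[0].lower(), fields[1]))
    return entries

def is_authorized(ad_system: str, seller_id: str, app_ads_text: str) -> bool:
    return (ad_system.lower(), seller_id) in parse_app_ads(app_ads_text)

print(is_authorized("exampleexchange.com", "pub-1234567890", APP_ADS_TXT))  # True
print(is_authorized("exampleexchange.com", "pub-spoofed-id", APP_ADS_TXT))  # False
```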
|
||||
537 | Facial Recognition, Other 'Risky' AI Set for Constraints in EU | The European Commission has proposed new rules constraining the use of facial recognition and other artificial intelligence applications, and threatening fines for companies that fail to comply. The rules would apply to companies that, among other things, exploit vulnerable groups, deploy subliminal techniques, or score people's social behavior. The use of real-time remote biometric identification systems by law enforcement also would be prohibited unless used specifically to prevent a terror attack, find missing children, or for other public security emergencies. Other high-risk applications, including for self-driving cars and in employment or asylum decisions, would have to undergo checks of their systems before deployment. The proposed rules need to be approved by the European Parliament and by individual member-states before they could become law. | [] | [] | [] | scitechnews | None | None | None | None | The European Commission has proposed new rules constraining the use of facial recognition and other artificial intelligence applications, and threatening fines for companies that fail to comply. The rules would apply to companies that, among other things, exploit vulnerable groups, deploy subliminal techniques, or score people's social behavior. The use of real-time remote biometric identification systems by law enforcement also would be prohibited unless used specifically to prevent a terror attack, find missing children, or for other public security emergencies. Other high-risk applications, including for self-driving cars and in employment or asylum decisions, would have to undergo checks of their systems before deployment. The proposed rules need to be approved by the European Parliament and by individual member-states before they could become law.
|
||||
539 | U.S. Takes Steps to Protect Electric System From Cyberattack | The U.S. Department of Energy (DOE) announced Monday a 100-day initiative that aims to protect the nation's electric system from cyberattacks. The initiative calls on owners and operators of power plants and electric utilities to follow concrete milestones to implement technologies that allow for real-time intrusion detection and response. In addition, DOE is requesting feedback from electric utilities, energy companies, government agencies, and others on how to safeguard the energy system supply chain. Energy Secretary Jennifer Granholm said the U.S. "faces a well-documented and increasing cyber threat from malicious actors seeking to disrupt the electricity Americans rely on to power our homes and businesses." | [] | [] | [] | scitechnews | None | None | None | None | The U.S. Department of Energy (DOE) announced Monday a 100-day initiative that aims to protect the nation's electric system from cyberattacks. The initiative calls on owners and operators of power plants and electric utilities to follow concrete milestones to implement technologies that allow for real-time intrusion detection and response. In addition, DOE is requesting feedback from electric utilities, energy companies, government agencies, and others on how to safeguard the energy system supply chain. Energy Secretary Jennifer Granholm said the U.S. "faces a well-documented and increasing cyber threat from malicious actors seeking to disrupt the electricity Americans rely on to power our homes and businesses."
|
||||
540 | Color-Changing Beetle Inspires Algorithm for Efficient Engineering | An algorithm inspired by the way the golden tortoise beetle changes color can address engineering challenges faster than other approaches, according to researchers at Iran's University of Tabriz. The male beetle's ability to change its wing casings' hue to attract females and ward off predators spurred Tabriz's Omid Tarkhaneh and colleagues to generate a virtual landscape that represents all potential solutions to a given problem; a population of virtual beetles inhabits this space, with each beetle's location signaling a possible solution. For each algorithm iteration, the quality of each solution is tested and the color of each virtual beetle changes to represent its viability, with the simulated attraction dynamic causing some or all of the beetles to converge on a position that represents the optimum solution. In applying this algorithm to two common engineering problems, the researchers found it more efficient at finding solutions than five existing nature-inspired evolutionary algorithms. | [] | [] | [] | scitechnews | None | None | None | None | An algorithm inspired by the way the golden tortoise beetle changes color can address engineering challenges faster than other approaches, according to researchers at Iran's University of Tabriz. The male beetle's ability to change its wing casings' hue to attract females and ward off predators spurred Tabriz's Omid Tarkhaneh and colleagues to generate a virtual landscape that represents all potential solutions to a given problem; a population of virtual beetles inhabits this space, with each beetle's location signaling a possible solution. For each algorithm iteration, the quality of each solution is tested and the color of each virtual beetle changes to represent its viability, with the simulated attraction dynamic causing some or all of the beetles to converge on a position that represents the optimum solution. In applying this algorithm to two common engineering problems, the researchers found it more efficient at finding solutions than five existing nature-inspired evolutionary algorithms.
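The sketch below gives a hedged sense of how such a color-and-attraction scheme can be written down for a toy objective. It follows the high-level description above rather than the authors' published algorithm, and the population size, attraction strength, and noise level are arbitrary choices.

```python
import numpy as np

# Illustrative beetle-style optimizer (not the authors' algorithm): each virtual
# beetle is a candidate solution; its "color" is mapped from its fitness, and
# beetles drift toward the best-colored beetle with some random exploration.

rng = np.random.default_rng(0)

def sphere(x):                                  # toy objective: minimum at the origin
    return float(np.sum(x ** 2))

def optimize(objective, dim=5, n_beetles=30, iters=200,
             attraction=0.5, noise=0.1, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_beetles, dim))
    for _ in range(iters):
        fitness = np.array([objective(p) for p in pos])
        # "color": 1.0 for the current best solution, fading toward 0.0 for the worst
        color = 1.0 - (fitness - fitness.min()) / (np.ptp(fitness) + 1e-12)
        best = pos[fitness.argmin()]
        # dull-colored (poorer) beetles are pulled more strongly toward the best one
        pull = attraction * (1.0 - color)[:, None] * (best - pos)
        pos = np.clip(pos + pull + noise * rng.normal(size=pos.shape), lo, hi)
    fitness = np.array([objective(p) for p in pos])
    return pos[fitness.argmin()], float(fitness.min())

best_x, best_f = optimize(sphere)
print("best solution:", best_x.round(3), " objective:", round(best_f, 6))
```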
|
||||
541 | U.S. Banks Deploy AI to Monitor Customers, Workers Amid Tech Backlash | April 19 (Reuters) - Several U.S. banks have started deploying camera software that can analyze customer preferences, monitor workers and spot people sleeping near ATMs, even as they remain wary about possible backlash over increased surveillance, more than a dozen banking and technology sources told Reuters.
Previously unreported trials at City National Bank of Florida (BCI.SN) and JPMorgan Chase & Co (JPM.N) as well as earlier rollouts at banks such as Wells Fargo & Co (WFC.N) offer a rare view into the potential U.S. financial institutions see in facial recognition and related artificial intelligence systems.
Widespread deployment of such visual AI tools in the heavily regulated banking sector would be a significant step toward their becoming mainstream in corporate America.
Bobby Dominguez, chief information security officer at City National, said smartphones that unlock via a face scan have paved the way.
"We're already leveraging facial recognition on mobile," he said. "Why not leverage it in the real world?"
City National will begin facial recognition trials early next year to identify customers at teller machines and employees at branches, aiming to replace clunky and less secure authentication measures at its 31 sites, Dominguez said. Eventually, the software could spot people on government watch lists, he said.
JPMorgan said it is "conducting a small test of video analytic technology with a handful of branches in Ohio." Wells Fargo said it works to prevent fraud but declined to discuss how.
Civil liberties issues loom large. Critics point to arrests of innocent individuals following faulty facial matches, disproportionate use of the systems to monitor lower-income and non-white communities, and the loss of privacy inherent in ubiquitous surveillance.
Portland, Oregon, as of Jan. 1 banned businesses from using facial recognition "in places of public accommodation," and drugstore chain Rite Aid Corp (RAD.N) shut a nationwide face recognition program last year.
Dominguez and other bank executives said their deployments are sensitive to the issues.
"We're never going to compromise our clients' privacy," Dominguez said. "We're getting off to an early start on technology already used in other parts of the world and that is rapidly coming to the American banking network."
Still, the big question among banks, said Fredrik Nilsson, vice president of the Americas at Axis Communications, a top maker of surveillance cameras, is "what will be the potential backlash from the public if we roll this out?"
Walter Connors, chief information officer at Brannen Bank, said the Florida company had discussed but not adopted the technology for its 12 locations. "Anybody walking into a branch expects to be recorded," Connors said. "But when you're talking about face recognition, that's a larger conversation."
BUSINESS INTELLIGENCE
JPMorgan began assessing the potential of computer vision in 2019 by using internally developed software to analyze archived footage from Chase branches in New York and Ohio, where one of its two Innovation Labs is located, said two people including former employee Neil Bhandar, who oversaw some of the effort at the time.
Chase aims to gather data to better schedule staff and design branches, three people said and the bank confirmed. Bhandar said some staff even went to one of Amazon.com Inc's (AMZN.O) cashier-less convenience stores to learn about its computer vision system.
Preliminary analysis by Bhandar of branch footage revealed more men would visit before or after lunch, while women tended to arrive mid-afternoon. Bhandar said he also wanted to analyze whether women avoided compact spaces in ATM lobbies because they might bump into someone, but the pandemic halted the plan.
Testing facial recognition to identify clients as they walk into a Chase bank, if they consented to it, has been another possibility considered to enhance their experience, a current employee involved in innovation projects said.
Chase would not be the first to evaluate those uses. A bank in the Northeast recently used computer vision to identify busy areas in branches with newer layouts, an executive there said, speaking on the condition the company not be named.
A Midwestern credit union last year tested facial recognition for client identification at four locations before pausing over cost concerns, a source said.
While Chase developed custom computer vision in-house using components from Google (GOOGL.O) , IBM Watson (IBM.N) and Amazon Web Services, it also considered fully built systems from software startups AnyVision and Vintra, people including Bhandar said. AnyVision declined to comment, and Vintra did not respond to requests for comment.
Chase said it ultimately chose a different vendor, which it declined to name, out of 11 options considered and began testing that company's technology at a handful of Ohio locations last October. The effort aims to identify transaction times, how many people leave because of long queues and which activities are occupying workers.
The bank added that facial, race and gender recognition are not part of this test.
Using technology to guess customers' demographics can be problematic, some ethics experts say, because it reinforces stereotypes. Some computer vision programs also are less accurate on people of color, and critics have warned that could lead to unjust outcomes.
Chase has weighed ethical questions. For instance, some internally called for reconsidering planned testing in Harlem, a historically Black neighborhood in New York, because it could be viewed as racially insensitive, two of the people said. The discussions emerged about the same time as a December 2019 New York Times article about racism at Chase branches in Arizona.
Analyzing race was not part of the eventually tabled plans, and the Harlem branch had been selected because it housed the other Chase Innovation Lab for evaluating new technology, the people said and the bank confirmed.
TARGETING THE HOMELESS
Security uses for computer vision long have stirred banks' interest. Wells Fargo used primitive software from the company 3VR over a decade ago to review footage of crimes and see if any faces matched those of known offenders, said John Honovich, who worked at 3VR and founded video surveillance research organization IPVM.
Identiv, which acquired 3VR in 2018, said banking sales were a major focus, but it declined to comment on Wells Fargo.
A security executive at a mid-sized Southern bank, speaking on the condition of anonymity to discuss secret measures, said over the last 18 months it has rolled out video analytics software at nearly every branch to generate alerts when doors to safes, computer server rooms and other sensitive areas are left open.
Outside, the bank monitors for loitering, such as the recurring issue of people setting up tents under the overhang for drive-through ATMs. Security staff at a control center can play an audio recording politely asking those people to leave, the executive said.
The issue of people sleeping in enclosed ATM lobbies has long been an industry concern, said Brian Karas, vice president of sales at Airship Industries, which develops video management and analytics software.
Systems that detected loitering so staff could activate a siren or strobe light helped increase ATM usage and reduce vandalism for several banks, he said. Though companies did not want to displace people seeking shelter, they felt this was necessary to make ATMs safe and accessible, Karas said.
City National's Dominguez said the bank's branches use computer vision to detect suspicious activity outside.
Sales records from 2010 and 2011 reviewed by Reuters show that Bank of America Corp (BAC.N) purchased "iCVR" cameras, which were marketed at the time as helping organizations reduce loitering in ATM lobbies. Bank of America said it no longer uses iCVR technology.
The Charlotte, North Carolina-based bank's interest in computer vision has not abated. Its officials met with AnyVision on multiple occasions in 2019, including at a September conference during which the startup demonstrated how it could identify the face of a Bank of America executive, according to records of the presentation seen by Reuters and a person in attendance.
The bank said, "We are always reviewing potential new technology solutions that are on the market."
Our Standards: The Thomson Reuters Trust Principles. | Several U.S. banks, including City National Bank of Florida, JPMorgan Chase & Co., and Wells Fargo & Co. are rolling out artificial intelligence systems to analyze customer preferences, monitor employees, and detect suspicious activity near ATMs. City National will commence facial recognition trials in early 2022, with the goal of replacing less-secure authentication systems. JPMorgan is testing video analytic technology at some Ohio branches, and Wells Fargo uses the technology in an effort to prevent fraud. Concerns about the use of such technology range from errors in facial matches and the loss of privacy to disproportionate use of monitoring systems in lower-income and non-white communities. Florida-based Brannen Bank's Walter Connors said, "Anybody walking into a branch expects to be recorded. But when you're talking about face recognition, that's a larger conversation." | [] | [] | [] | scitechnews | None | None | None | None | Several U.S. banks, including City National Bank of Florida, JPMorgan Chase & Co., and Wells Fargo & Co. are rolling out artificial intelligence systems to analyze customer preferences, monitor employees, and detect suspicious activity near ATMs. City National will commence facial recognition trials in early 2022, with the goal of replacing less-secure authentication systems. JPMorgan is testing video analytic technology at some Ohio branches, and Wells Fargo uses the technology in an effort to prevent fraud. Concerns about the use of such technology range from errors in facial matches and the loss of privacy to disproportionate use of monitoring systems in lower-income and non-white communities. Florida-based Brannen Bank's Walter Connors said, "Anybody walking into a branch expects to be recorded. But when you're talking about face recognition, that's a larger conversation."
April 19 (Reuters) - Several U.S. banks have started deploying camera software that can analyze customer preferences, monitor workers and spot people sleeping near ATMs, even as they remain wary about possible backlash over increased surveillance, more than a dozen banking and technology sources told Reuters.
Previously unreported trials at City National Bank of Florida (BCI.SN) and JPMorgan Chase & Co (JPM.N) as well as earlier rollouts at banks such as Wells Fargo & Co (WFC.N) offer a rare view into the potential U.S. financial institutions see in facial recognition and related artificial intelligence systems.
Widespread deployment of such visual AI tools in the heavily regulated banking sector would be a significant step toward their becoming mainstream in corporate America.
Bobby Dominguez, chief information security officer at City National, said smartphones that unlock via a face scan have paved the way.
"We're already leveraging facial recognition on mobile," he said. "Why not leverage it in the real world?"
City National will begin facial recognition trials early next year to identify customers at teller machines and employees at branches, aiming to replace clunky and less secure authentication measures at its 31 sites, Dominguez said. Eventually, the software could spot people on government watch lists, he said.
JPMorgan said it is "conducting a small test of video analytic technology with a handful of branches in Ohio." Wells Fargo said it works to prevent fraud but declined to discuss how.
Civil liberties issues loom large. Critics point to arrests of innocent individuals following faulty facial matches, disproportionate use of the systems to monitor lower-income and non-white communities, and the loss of privacy inherent in ubiquitous surveillance.
Portland, Oregon, as of Jan. 1 banned businesses from using facial recognition "in places of public accommodation," and drugstore chain Rite Aid Corp (RAD.N) shut a nationwide face recognition program last year.
Dominguez and other bank executives said their deployments are sensitive to the issues.
"We're never going to compromise our clients' privacy," Dominguez said. "We're getting off to an early start on technology already used in other parts of the world and that is rapidly coming to the American banking network."
Still, the big question among banks, said Fredrik Nilsson, vice president of the Americas at Axis Communications, a top maker of surveillance cameras, is "what will be the potential backlash from the public if we roll this out?"
Walter Connors, chief information officer at Brannen Bank, said the Florida company had discussed but not adopted the technology for its 12 locations. "Anybody walking into a branch expects to be recorded," Connors said. "But when you're talking about face recognition, that's a larger conversation."
BUSINESS INTELLIGENCE
JPMorgan began assessing the potential of computer vision in 2019 by using internally developed software to analyze archived footage from Chase branches in New York and Ohio, where one of its two Innovation Labs is located, said two people including former employee Neil Bhandar, who oversaw some of the effort at the time.
Chase aims to gather data to better schedule staff and design branches, three people said and the bank confirmed. Bhandar said some staff even went to one of Amazon.com Inc's (AMZN.O) cashier-less convenience stores to learn about its computer vision system.
Preliminary analysis by Bhandar of branch footage revealed more men would visit before or after lunch, while women tended to arrive mid-afternoon. Bhandar said he also wanted to analyze whether women avoided compact spaces in ATM lobbies because they might bump into someone, but the pandemic halted the plan.
Testing facial recognition to identify clients as they walk into a Chase bank, if they consented to it, has been another possibility considered to enhance their experience, a current employee involved in innovation projects said.
Chase would not be the first to evaluate those uses. A bank in the Northeast recently used computer vision to identify busy areas in branches with newer layouts, an executive there said, speaking on the condition the company not be named.
A Midwestern credit union last year tested facial recognition for client identification at four locations before pausing over cost concerns, a source said.
While Chase developed custom computer vision in-house using components from Google (GOOGL.O) , IBM Watson (IBM.N) and Amazon Web Services, it also considered fully built systems from software startups AnyVision and Vintra, people including Bhandar said. AnyVision declined to comment, and Vintra did not respond to requests for comment.
Chase said it ultimately chose a different vendor, which it declined to name, out of 11 options considered and began testing that company's technology at a handful of Ohio locations last October. The effort aims to identify transaction times, how many people leave because of long queues and which activities are occupying workers.
The bank added that facial, race and gender recognition are not part of this test.
Using technology to guess customers' demographics can be problematic, some ethics experts say, because it reinforces stereotypes. Some computer vision programs also are less accurate on people of color, and critics have warned that could lead to unjust outcomes.
Chase has weighed ethical questions. For instance, some internally called for reconsidering planned testing in Harlem, a historically Black neighborhood in New York, because it could be viewed as racially insensitive, two of the people said. The discussions emerged about the same time as a December 2019 New York Times article about racism at Chase branches in Arizona.
Analyzing race was not part of the eventually tabled plans, and the Harlem branch had been selected because it housed the other Chase Innovation Lab for evaluating new technology, the people said and the bank confirmed.
TARGETING THE HOMELESS
Security uses for computer vision long have stirred banks' interest. Wells Fargo used primitive software from the company 3VR over a decade ago to review footage of crimes and see if any faces matched those of known offenders, said John Honovich, who worked at 3VR and founded video surveillance research organization IPVM.
Identiv, which acquired 3VR in 2018, said banking sales were a major focus, but it declined to comment on Wells Fargo.
A security executive at a mid-sized Southern bank, speaking on the condition of anonymity to discuss secret measures, said over the last 18 months it has rolled out video analytics software at nearly every branch to generate alerts when doors to safes, computer server rooms and other sensitive areas are left open.
Outside, the bank monitors for loitering, such as the recurring issue of people setting up tents under the overhang for drive-through ATMs. Security staff at a control center can play an audio recording politely asking those people to leave, the executive said.
The issue of people sleeping in enclosed ATM lobbies has long been an industry concern, said Brian Karas, vice president of sales at Airship Industries, which develops video management and analytics software.
Systems that detected loitering so staff could activate a siren or strobe light helped increase ATM usage and reduce vandalism for several banks, he said. Though companies did not want to displace people seeking shelter, they felt this was necessary to make ATMs safe and accessible, Karas said.
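As a rough sketch of how such a dwell-time rule might work, the example below flags anyone whose detections span more than a fixed threshold; the 15-minute figure and the detection feed are assumptions for illustration, not details reported by any bank.

```python
# Minimal dwell-time loitering rule (illustrative only). A real deployment would consume
# tracked detections from a video-analytics pipeline; here they are plain tuples.
LOITER_THRESHOLD_S = 15 * 60  # assumed: alert after 15 minutes of continuous presence

def loitering_alerts(detections):
    """detections: iterable of (person_id, timestamp_seconds); returns ids to flag."""
    spans = {}
    for person_id, ts in detections:
        lo, hi = spans.get(person_id, (ts, ts))
        spans[person_id] = (min(lo, ts), max(hi, ts))
    return [pid for pid, (lo, hi) in spans.items() if hi - lo >= LOITER_THRESHOLD_S]

# Person "A" lingers for 20 minutes; person "B" only passes through.
sample = [("A", 0), ("B", 30), ("A", 600), ("B", 90), ("A", 1200)]
print(loitering_alerts(sample))  # ['A']
```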
City National's Dominguez said the bank's branches use computer vision to detect suspicious activity outside.
Sales records from 2010 and 2011 reviewed by Reuters show that Bank of America Corp (BAC.N) purchased "iCVR" cameras, which were marketed at the time as helping organizations reduce loitering in ATM lobbies. Bank of America said it no longer uses iCVR technology.
The Charlotte, North Carolina-based bank's interest in computer vision has not abated. Its officials met with AnyVision on multiple occasions in 2019, including at a September conference during which the startup demonstrated how it could identify the face of a Bank of America executive, according to records of the presentation seen by Reuters and a person in attendance.
The bank said, "We are always reviewing potential new technology solutions that are on the market."
542 | Microsoft's Nuance Gambit Shows Healthcare Shaping Up as Next Tech Battleground | Microsoft's $16-billion acquisition of Nuance Communications Inc. comes as the pandemic highlights the healthcare industry's potential as a growth area for technology companies. Analysts say the deal will enable Microsoft to use the speech-recognition software provider as a way to sell more lucrative products and services to its healthcare customers. In addition, Microsoft will be able to integrate the understanding of medical terminology in Nuance's language-processing engine into products like Teams. The Nuance deal follows Amazon's announcement of plans to roll out telehealth services nationwide. Meanwhile, Apple is selling its iPhone and Apple Watch devices to healthcare providers, and Google is working with two medical systems to make health records searchable. Gartner Inc.'s Gregg Pessin said, "The pandemic response by the healthcare industry has proven the value of technology to healthcare delivery. All the digital giants are paying attention."
544 | U.K. Regulator Gives Green Light to Delivery Drone Trials | The U.K. Civil Aviation Authority (CAA) has authorized a trial in which drone company Sees.ai will operate regular drone flights beyond the pilot's line of sight at three remote industrial sites. Remote pilots will fly the drones using only cameras and sensors. If they are successful, the trials could enable drone flights to be rolled out at scale throughout the logistics sector. Sees.ai's John McKenna believes autonomous drones likely will be used initially in industrial settings to monitor rail and road infrastructure or nuclear power plants. He said drone delivery of Amazon packages or pizzas is "still a long way off."
545 | DNA Robots Designed in Minutes Instead of Days | Someday, scientists believe, tiny DNA-based robots and other nanodevices will deliver medicine inside our bodies, detect the presence of deadly pathogens, and help manufacture increasingly smaller electronics.
Researchers took a big step toward that future by developing a new tool that can design much more complex DNA robots and nanodevices than were ever possible before in a fraction of the time.
In a paper published today (April 19, 2021) in the journal Nature Materials , researchers from The Ohio State University - led by former engineering doctoral student Chao-Min Huang - unveiled new software they call MagicDNA.
The software helps researchers design ways to take tiny strands of DNA and combine them into complex structures with parts like rotors and hinges that can move and complete a variety of tasks, including drug delivery.
Researchers have been doing this for a number of years with slower tools with tedious manual steps, said Carlos Castro , co-author of the study and associate professor of mechanical and aerospace engineering at Ohio State .
"But now, nanodevices that may have taken us several days to design before now take us just a few minutes," Castro said.
And now researchers can make much more complex - and useful - nanodevices.
"Previously, we could build devices with up to about six individual components and connect them with joints and hinges and try to make them execute complex motions," said study co-author Hai-Jun Su , professor of mechanical and aerospace engineering at Ohio State.
"With this software, it is not hard to make robots or other devices with upwards of 20 components that are much easier to control. It is a huge step in our ability to design nanodevices that can perform the complex actions that we want them to do."
The software has a variety of advantages that will help scientists design better, more helpful nanodevices and - researchers hope - shorten the time before they are in everyday use.
One advantage is that it allows researchers to carry out the entire design truly in 3D. Earlier design tools only allowed creation in 2D, forcing researchers to map their creations into 3D. That meant designers couldn't make their devices too complex.
The software also allows designers to build DNA structures "bottom up" or "top down."
In "bottom up" design, researchers take individual strands of DNA and decide how to organize them into the structure they want, which allows fine control over local device structure and properties.
But they can also take a "top down" approach where they decide how their overall device needs to be shaped geometrically and then automate how the DNA strands are put together.
Combining the two allows for increasing complexity of the overall geometry while maintaining precise control over individual component properties, Castro said.
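To make the idea concrete, here is a hypothetical sketch of what a combined top-down/bottom-up description of a device might look like; the class and field names are invented for illustration and are not the MagicDNA interface.

```python
# Illustrative sketch only -- not the MagicDNA software. It shows the general idea of
# describing a nanodevice top-down as components linked by joints, while each component
# keeps its own low-level ("bottom-up") parameters such as DNA helix bundle size.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    n_helices: int = 6        # bottom-up detail: helices in the bundle (assumed default)
    length_nm: float = 50.0   # bottom-up detail: component length in nanometers

@dataclass
class Device:
    components: list = field(default_factory=list)
    joints: list = field(default_factory=list)   # (component_a, component_b, joint_type)

    def add(self, comp):
        self.components.append(comp)
        return comp

    def connect(self, a, b, joint_type="hinge"):
        self.joints.append((a.name, b.name, joint_type))

# Top-down: lay out an "arm with a claw" from named parts, then refine each part later.
arm = Device()
base = arm.add(Component("base", n_helices=12))
link = arm.add(Component("link"))
claw = arm.add(Component("claw", length_nm=20.0))
arm.connect(base, link, "hinge")
arm.connect(link, claw, "rotor")
print(len(arm.components), "components,", len(arm.joints), "joints")
```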
Another key element of the software is that it allows simulations of how designed DNA devices would move and operate in the real world.
"As you make these structures more complex, it is difficult to predict exactly what they are going to look like and how they are going to behave," Castro said.
"It is critical to be able to simulate how our devices will actually operate. Otherwise, we waste a lot of time."
As a demonstration of the software's ability, co-author Anjelica Kucinic, a doctoral student in chemical and biomolecular engineering at Ohio State, led the researchers in making and characterizing many nanostructures designed by the software.
Some of the devices they created included robot arms with claws that can pick up smaller items, and a hundred-nanometer-sized structure that looks like an airplane (the "airplane" is 1,000 times smaller than the width of a human hair).
The ability to make more complex nanodevices means that they can do more useful things and even carry out multiple tasks with one device, Castro said.
For example, it is one thing to have a DNA robot that, after injection into the bloodstream, can detect a certain pathogen.
"But a more complex device may not only detect that something bad is happening, but can also react by releasing a drug or capturing the pathogen," he said.
"We want to be able to design robots that respond in a particular way to a stimulus or move in a certain way."
Castro said he expects that for the next few years, the MagicDNA software will be used at universities and other research labs. But its use could expand in the future.
"There is getting to be more and more commercial interest in DNA nanotechnology," he said. "I think in the next five to 10 years we will start seeing commercial applications of DNA nanodevices and we are optimistic that this software can help drive that."
Joshua Johnson, who received his PhD at Ohio State in biophysics, was also a co-author of the paper.
The research was supported by grants from the National Science Foundation . | Software developed by researchers at Ohio State University can help combine tiny DNA strands into robots that potentially could be used to deliver drugs inside the body, detect deadly pathogens, or develop even smaller electronics. The MagicDNA software can develop nanodevices in just minutes, compared to several days when done manually. It also can create more complex nanodevices with up to 20 components that are easier to control, compared with about six components connected with joints and hinges built using traditional processes. The software allows for an entirely three-dimensional design process, with researchers able to build DNA structures "bottom up," in which they decide how to organize individual DNA strands into the desired structure, or "top down," in which they determine the device shape and then automate the organization of the DNA strands. The software also simulates real-world movement and operation of the devices.
548 | A Tool for Navigating Complex Computer Instructions | We've come a long way since Intel introduced the first microprocessor in 1971. Its 4004 held 2,300 transistors, while today's best chips exceed billions, delivering more and more computing power.
But every time Intel releases a new computer chip, it's a costly investment, as the company adds new instructions that tell the processor which data to process and how to process it. These are things a user doesn't see, but they power tasks like image processing, machine learning, and video coding.
However, the programs that translate developers' code into those instructions, called compilers, can't always use these more complex instructions. The burden then often falls on expert developers to do more of the work by hand, and to perform error-prone and cumbersome tasks like writing assembly code.
Scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a way to better navigate the complexity of supporting these instructions. Their tool "VeGen" (pronounced "vegan") automatically generates compiler plugins to effectively use more complicated instructions.
CSAIL PhD student Yishen Chen says that VeGen takes in the same documentation that Intel gives to software developers, and automatically generates a compiler plugin that lets the compiler exploit so-called non-SIMD instructions - instructions that go beyond the traditional Single Instruction Multiple Data (SIMD) model and are more complicated, but can accelerate a given user-supplied program.
"Without VeGen, compiler engineers have to read the documentation and manually modify the compiler to use these instructions," says Chen, an author on a new paper about VeGen. "The problems here are that this is still manual, and current compiler algorithms are not effective at using these instructions."
Instruction methods
Most processors use math-based instructions that allow you to do something like "A= B+C."
Processors also support something called vector instructions, which are instructions that do multiple but identical operations at once, such as "A1=B1+C1 and A2=B2+C2." These are both considered more traditional "SIMD" instructions.
"Non-SIMD" instructions are more complicated, but even more powerful and efficient, such as instructions that perform both additions and subtractions simultaneously. Chen says that VeGen is mostly motivated by instructions that don't fit the SIMD model, in one way or another.
Think of the whole process like a restaurant: the new chip instructions are the new kitchen equipment, and the compiler is the sous chef who has to put it to use.
If the sous chef and his team don't know how to use the new equipment, the restaurant owners who spent all the money remodeling the kitchen will not be happy.
"With the advent of complex instructions, it's become hard for compiler developers to keep code generation strategies up-to-date in order to harness the full potential supported by the underlying hardware," says Charith Mendis, professor at the University of Illinois at Urbana-Champaign, an author on a paper about the tool. "VeGen's approach to building code generator generators alleviates this burden by automatically generating parts of the compiler responsible for identifying code sequences that can exploit new hardware features. We hope that VeGen's approach to building compiler components will lead to more sustainable and maintainable compiler infrastructures in the future."
Initial results showed that, for example, on select video coding kernels, VeGen could automatically use non-SIMD vector instructions and get speedup from 27 percent to 300 percent.
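Under the hood, a key step is recognizing when a sequence of scalar operations matches an instruction's documented per-lane semantics. The toy sketch below illustrates that idea; the spec format and matcher are invented for illustration and are not VeGen's actual representation.

```python
# Hypothetical sketch: describe an instruction's per-lane semantics as a small pattern,
# then check whether a candidate code fragment fits it. VeGen's real machinery is far
# more involved; all names here are invented.
ADDSUB_SPEC = ["sub", "add"]  # lane 0 subtracts, lane 1 adds (alternating pattern)

def matches(spec, lane_ops):
    """True if the per-lane ops of a code fragment repeat the instruction's pattern."""
    if not lane_ops or len(lane_ops) % len(spec) != 0:
        return False
    return all(op == spec[i % len(spec)] for i, op in enumerate(lane_ops))

# A fragment computing A0=B0-C0, A1=B1+C1, A2=B2-C2, A3=B3+C3 fits the pattern:
print(matches(ADDSUB_SPEC, ["sub", "add", "sub", "add"]))  # True
# A plain elementwise add does not:
print(matches(ADDSUB_SPEC, ["add", "add", "add", "add"]))  # False
```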
"Putting all the Intel instruction manuals together is more than one foot wide, going into thousands of pages," says MIT professor Saman Amarasinghe, an author on the paper about VeGen. "Normally, the compiler writer has to pour over the fine details of instruction changes, spread over hundreds of pages, but VeGen totally bypasses the tedious work."
"As hardware becomes more complicated to accelerate compute-intensive domains, we believe VenGen is a valuable contribution," says Chen. "The long-term goal is that, whenever you add new features on your hardware, we can automatically figure out a way --without having to rewrite your code -- to use those hardware accelerators."
Chen wrote the paper alongside Mendis, and MIT professors Michael Carbin and Saman Amarasinghe. They will present the paper virtually at the Architectural Support for Programming Languages and Operating Systems (ASPLOS) conference in April. | A new tool developed by researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Illinois at Urbana-Champaign automatically generates compiler plugins that can handle more complex instructions. The tool, VeGen, could help eliminate the need for software developers to manually write assembly code for new Intel computer chips. The compiler plugins generated by VeGen allow for the exploitation of non-Single Instruction Multiple Data (SIMD), which allows multiple operations, like addition and subtraction, to be performed simultaneously. CSAIL's Yishen Chen said, "The long-term goal is that, whenever you add new features on your hardware, we can automatically figure out a way - without having to rewrite your code - to use those hardware accelerators."
549 | Algorithm Uses Online Learning for Massive Cell Datasets | The fact that the human body is made up of cells is a basic, well-understood concept. Yet amazingly, scientists are still trying to determine the various types of cells that make up our organs and contribute to our health.
A relatively recent technique called single-cell sequencing is enabling researchers to recognize and categorize cell types by characteristics such as which genes they express. But this type of research generates enormous amounts of data, with datasets of hundreds of thousands to millions of cells.
A new algorithm developed by Joshua Welch, Ph.D ., of the Department of Computational Medicine and Bioinformatics, Ph.D. candidate Chao Gao and their team uses online learning, greatly speeding up this process and providing a way for researchers world-wide to analyze large data sets using the amount of memory found on a standard laptop computer. The findings are described in the journal Nature Biotechnology .
"Our technique allows anyone with a computer to perform analyses at the scale of an entire organism," says Welch. "That's really what the field is moving towards."
The team demonstrated their proof of principle using data sets from the National Institute of Health's Brain Initiative , a project aimed at understanding the human brain by mapping every cell, with investigative teams throughout the country, including Welch's lab.
Typically, explains Welch, for projects like this one, each single-cell data set that is submitted must be re-analyzed with the previous data sets in the order they arrive. Their new approach allows new datasets to be added to existing ones, without reprocessing the older datasets. It also enables researchers to break up datasets into so-called mini-batches to reduce the amount of memory needed to process them.
"This is crucial for the sets increasingly generated with millions of cells," Welch says. "This year, there have been five to six papers with two million cells or more and the amount of memory you need just to store the raw data is significantly more than anyone has on their computer."
Welch likens the online technique to the continuous data processing done by social media platforms like Facebook and Twitter, which must process continuously-generated data from users and serve up relevant posts to people's feeds. "Here, instead of people writing tweets, we have labs around the world performing experiments and releasing their data."
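As a hedged illustration of the general online, mini-batch idea - streaming cells in small chunks and keeping only running summaries in memory - here is a toy sketch; it is not the published algorithm, and the synthetic counts stand in for real single-cell data.

```python
# Generic online mini-batch sketch (not the published iNMF method): stream cells in
# chunks and update a running summary, so memory scales with the chunk size rather
# than the full dataset.
import numpy as np

def stream_mini_batches(n_cells, n_genes, batch_size=1000, seed=0):
    rng = np.random.default_rng(seed)
    for start in range(0, n_cells, batch_size):
        size = min(batch_size, n_cells - start)
        yield rng.poisson(1.0, size=(size, n_genes))  # stand-in for count data

n_seen = 0
running_mean = None
for batch in stream_mini_batches(n_cells=10_000, n_genes=200):
    batch_sum = batch.sum(axis=0)
    if running_mean is None:
        running_mean = np.zeros_like(batch_sum, dtype=float)
    k = batch.shape[0]
    # incremental update: new_mean = old_mean + (batch_sum - k*old_mean) / (n_seen + k)
    running_mean += (batch_sum - k * running_mean) / (n_seen + k)
    n_seen += k

print(n_seen, running_mean[:3])  # all cells processed without holding them in memory
```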
The finding has the potential to greatly improve efficiency for other ambitious projects like the Human Body Map and Human Cell Atlas . Says Welch, "Understanding the normal complement of cells in the body is the first step towards understanding how they go wrong in disease."
Paper cited: "Iterative single-cell multi-omic integration using online learning," Nature Biotechnology . DOI: 10.1038/s41587-021-00867-x | An algorithm developed by University of Michigan (U of M) researchers employs online learning to accelerate the analysis of enormous cell datasets, using the amount of memory found on a standard laptop computer. The algorithm enables new datasets to be added to existing ones without reprocessing the older datasets, and allows researchers to segment datasets into mini-batches so less memory is required for processing. U of M's Joshua Welch said, "Our technique allows anyone with a computer to perform analyses at the scale of an entire organism. That's really what the field is moving towards."
550 | Adobe Co-Founder Charles Geschke, Pioneer of Desktop Publishing and PDFs, Dies at Age 81 | Charles Geschke, who studied Latin and liberal arts as an undergraduate and once considered the priesthood, discovered computer programming more or less by accident in the 1960s.
That led to a job at Xerox Corp.'s research arm in Silicon Valley, where he bonded with a colleague, John Warnock. They worked on software that eventually would translate words and images on a computer screen into printed documents. | ACM Fellow Charles Geschke, co-founder of software giant Adobe, has died at the age of 81. Geschke formed Adobe with John Warnock, a fellow researcher at Xerox's Palo Alto Research Center, based on their work on software that eventually would translate words and images on a computer screen into printed documents. Adobe software gave birth to desktop publishing with programs like Photoshop, Acrobat, and Illustrator, as well as Portable Document Format (PDF) technology. Geschke and Warnock launched Adobe with the help of venture capital firm Hambrecht & Quist, and Apple Computer was an early client, using the company's PostScript language to drive its LaserWriter printers.
551 | NASA's Ingenuity Helicopter Makes Historic First Flight on Mars | The U.S. National Aeronautics and Space Administration (NASA)'s Ingenuity helicopter has become the first aircraft to achieve powered flight on Mars. Ingenuity accompanied the Perseverance rover, which landed on Mars in February; it operates autonomously based on preprogrammed instructions and uses its cameras and sensors to navigate. For its first Martian flight, the aircraft ascended, hovered for about 40 seconds, and returned to its landing spot. Ingenuity will perform up to four flights roughly every three Martian days during the next month; its next two flights will take the helicopter up to five meters (16.4 feet) above the surface and move it up to 15 meters (49 feet) forward and back to the landing area. NASA's David Flannery suggested Ingenuity's flights could lead to the design of more resilient drones, as well as informing the evolution of drones on Earth.
552 | Robotic Elephant Trunk Can Learn Tasks on Its Own | Researchers at Germany's University of Tubingen used three-dimensional (3D) printing to create a robotic elephant trunk that utilizes artificial intelligence (AI) to emulate how sensory input triggers synaptic chain reactions in organic brains. Tubingen's Sebastian Otte and colleagues assembled the device from modules with gear-driving motors that tilt up to 40 degrees in two axes. The AI was trained on examples of the motor inputs required to move the trunk in certain ways, and tests showed it could direct the tip of the trunk within less than a centimeter from a target. The robot is a proof of concept of a spiking neural network algorithm, which works like an actual brain in which certain inputs cause a chain reaction of firing synapses, while requiring orders of magnitude less computational power and energy.
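For readers unfamiliar with the term, a minimal leaky integrate-and-fire neuron illustrates what "spiking" means in such networks; the parameters below are arbitrary, and the sketch is not the researchers' controller.

```python
# Minimal leaky integrate-and-fire neuron: inputs accumulate on a membrane potential
# that leaks over time, and the neuron emits a discrete spike only when a threshold
# is crossed. Parameters are arbitrary illustrative values.
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = leak * potential + x     # leaky integration of input current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # [0, 0, 1, 0, 0, 1]
```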
553 | Novel Use of 3D Geoinformation to Identify Urban Farming Sites | Predicting sunlight conditions to determine farming sites and suitable crops
Led by Dr Filip Biljecki, presidential young professor at NUS Design and Environment , the study investigates the possibility of using three-dimensional (3D) city models and urban digital twins to assess the suitability of farming locations in high-rise buildings in terms of sunlight availability.
Titled " 3D city models for urban farming site identification in buildings ," their research paper was published in the journal Computers, Environment and Urban Systems , based on a proof of concept focused on a residential building situated at Jurong West in Singapore. Field surveys were carried out to validate the simulation figures.
"We investigate whether vertical spaces of buildings comprising outdoor corridors, façades and windows receive sufficient photosynthetically active radiation (PAR) for growing food crops and do so at a high resolution, obtaining insights for hundreds of locations in a particular building," shared the paper's first author Mr Ankit Palliwal, who graduated from the NUS Geography with a Master of Science in Applied GIS.
PAR is defined as the portion of solar spectrum in the 400 to 700 nm wavelength range, which is utilised by plants for photosynthesis. Its amount is a key factor to understand whether a location has the potential for farming and what kind of crops can be grown at a specific site because different crops require different PAR conditions for its optimal growth.
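As a hedged sketch of how simulated PAR values might be turned into a suitability check, the example below converts hourly PAR into a daily light integral and compares it against per-crop requirements; the thresholds and conversion are illustrative placeholders, not the study's values.

```python
# Illustrative sketch: compare a location's simulated daily light integral (DLI, in
# mol/m^2/day, derived from hourly PAR) against per-crop requirements. The thresholds
# below are placeholders for illustration only.
CROP_DLI_REQUIREMENTS = {   # assumed minimum DLI for acceptable growth
    "lettuce": 12.0,
    "sweet pepper": 20.0,
}

def suitable_crops(hourly_par_umol, requirements=CROP_DLI_REQUIREMENTS):
    """hourly_par_umol: 24 hourly PAR averages in umol/m^2/s for one location."""
    # Convert each hourly average to moles over that hour: umol/m^2/s * 3600 s / 1e6
    dli = sum(v * 3600 / 1e6 for v in hourly_par_umol)
    return dli, [crop for crop, need in requirements.items() if dli >= need]

# A corridor spot with about six bright hours and dim light otherwise:
hours = [0]*7 + [450, 600, 700, 700, 600, 450] + [50]*5 + [0]*6
print(suitable_crops(hours))  # DLI of 13.5 -> suitable for lettuce but not sweet pepper
```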
"We conducted field measurements to verify the veracity of the simulations and concluded that 3D city models are a viable instrument for calculating the potential of spaces in buildings for urban farming, potentially replacing field surveys and doing so more efficiently. We were able to understand the farming conditions for each locality in a specific building without visiting it, and to decide which crops are suitable to be grown. For this particular building, we have identified locations that would be suitable for growing lettuce and sweet pepper. This research is the first instance in which 3D geoinformation has been used for this purpose, thus, we invented a new application of such data, which is becoming increasingly important in the context of smart cities," shared Dr Biljecki, the principal investigator of the study. | A study by researchers at the National University of Singapore looked at whether three-dimensional (3D) city models and urban digital twins could help identify high-rise buildings suitable for urban farming based on sunlight availability. The researchers assessed whether outdoor corridors, façades, and windows receive enough photosynthetically active radiation (PAR) to grow crops, and which crops can be grown at a specific site based on PAR conditions. They validated the simulations through field surveys, and found such 3D city models may be more efficient than field surveys in assessing urban farming conditions, which they said eventually could be scaled to cover entire cities. | [] | [] | [] | scitechnews | None | None | None | None | A study by researchers at the National University of Singapore looked at whether three-dimensional (3D) city models and urban digital twins could help identify high-rise buildings suitable for urban farming based on sunlight availability. The researchers assessed whether outdoor corridors, façades, and windows receive enough photosynthetically active radiation (PAR) to grow crops, and which crops can be grown at a specific site based on PAR conditions. They validated the simulations through field surveys, and found such 3D city models may be more efficient than field surveys in assessing urban farming conditions, which they said eventually could be scaled to cover entire cities.
555 | SMU's ChemGen Completes Essential Drug Discovery Work in Days | DALLAS ( SMU ) - SMU researchers have developed a set of computer-driven routines that can mimic chemical reactions in a lab, cutting the time and labor-related expense frequently required to find the best possible drug for a desired outcome.
The University has a patent pending for the computational routines under the name ChemGen . In addition to speeding the process of finding successful drugs for specific applications, ChemGen will allow smaller labs to contribute to meaningful research at a level many cannot currently afford.
"ChemGen has the ability to replace a team of 20 highly-skilled organic chemists in the optimization of a molecule of interest," said lead inventor John Wise , an SMU professor who specializes in structural biochemistry. "We're basically arming an army of smaller labs to do really sophisticated research.
"I would also hope that major drug companies take advantage of this technology, too," Wise said. ChemGen could potentially empower a building full of skilled chemists to dramatically increase their productivity from working on as few as six problems a year to as many as 60, he said.
"That's going to make new drugs come out faster and cheaper, which is exactly what we need for the coronavirus and whatever comes next," Wise said.
Currently, it can take 12 to 15 years for a new drug to work its way through the design, development, testing and approval process for use in patients. And while the mean cost of drug development to manufacturers is the subject of debate, estimates place that cost as high as $2.6 billion.
How it works
ChemGen speeds up an early part of the drug discovery process known as pharmacological optimization - making the drug functional and effective for specific applications - a task that can take months for a team of organic chemists to do. ChemGen can do the same tasks virtually in a few days using high performance computers like SMU's mammoth ManeFrame II .
Wise explains that the first step in creating a drug is to identify a molecular target that the drug can act on - a target that plays a role in allowing a person to be infected by a virus, to feel symptoms of a disease or to suffer other harm to the body. Once that target is identified, the next step is to find as many chemical keys as possible that can potentially block the target's function and prevent the negative biological effects that cause illness and disease. Both the molecular targets and the chemical keys that act upon them tend to be extremely complex molecules, responsible for a number of tasks in the human body. "They are like people," Wise said. "They're all different."
"When a drug company finds a drug hit - a chemical 'key' that they think could be valuable - they might have a team of very skilled chemists work on that one targeted molecule. That's not the only molecule they'll work with, but they might spend three months of the next year making 1,000 variations of that one molecule," Wise said.
This is the traditional approach to pharmacological optimization - chemists trying to determine if there's a better match to the target protein than the one they just found. The reason that matters is that if a drug doesn't fit the protein perfectly, it won't bond tightly enough with that protein to be effective. Researchers also need to identify what other proteins in the human body might unintentionally be blocked by that same key, possibly causing side effects.
ChemGen creates molecular variants of the original chemical key computationally instead of in a physical chemistry laboratory. It mimics what would happen under various combinations of circumstances.
"We taught ChemGen the rules of chemistry for these reactions - what can be done and what can't be done," said Wise, associate professor in the SMU Department of Biological Sciences."We can take a thousand compounds, react them in the computer, and make 1,000 products from that. Then we can take that group of 1,000 and react them with a second group of 1,000 other molecules to create a million different, but related products. This generates an enormous quantity of chemical variance for a given molecule."
As a result, ChemGen can look at those variants and determine if any of them are a better match for the targeted protein than the original key.
"The process is blind. There's no bias. It generates these variants, and then just says, 'How well do you fit, and it ranks that," Wise said. "So a research group or pharmaceutical company need only actually synthesize the molecules with the best chances of being improved, leaving the thousands of unimproved molecules in the computer and not on the lab bench.
"This approach is very efficient in both time and money," Wise said. "It limits waste and makes it more likely that the new drug will be better than what was originally discovered."
Several SMU scientists contributed to making ChemGen a reality
Wise has been working for more than a decade with other SMU scientists, including students, to develop what became ChemGen.
Wise got the idea to create ChemGen while he and Pia Vogel were trying to find compounds that can reverse chemotherapy failure in aggressive cancers. Vogel is a professor and director of SMU's Center for Drug Discovery, Design and Delivery .
Alex Lippert , an associate professor in chemistry, helped Wise program ChemGen to know what it could and couldn't do in a chemical reaction. Lippert and his PhD student Maha Aljowni also physically synthesized the drug compounds predicted by ChemGen and showed that it accurately predicted new molecules that could be active in multi-drug resistance cancer.
Robert Kalescky took the scripts Wise wrote and converted them to a different programming language, so that ChemGen works faster and can be used by anyone. Kalescky is SMU's HPC Applications Scientist, who assists the research community at SMU with their use of ManeFrame II.
Amila K. Nanayakkara , Mike Chen , Maisa Correa de Oliveira and Lauren Ammerman - all of whom were or are students in the Biological Sciences Ph.D. program at SMU - also helped test it. Ketetha Olengue also assisted in the early research when she was an undergraduate at SMU.
About SMU
SMU is the nationally ranked global research university in the dynamic city of Dallas. SMU's alumni, faculty and nearly 12,000 students in eight degree-granting schools demonstrate an entrepreneurial spirit as they lead change in their professions, communities and the world. | Southern Methodist University (SMU) researchers have developed ChemGen, a set of computer-driven routines that emulate chemical reactions in a laboratory, significantly reducing the time and costs of drug discovery. ChemGen accelerates pharmacological optimization from months to days, using high-performance computers like SMU's ManeFrame II shared high-performance computing cluster. The tool computationally generates molecular variants of the original chemical key, mimicking reactions under various combinations of circumstances. SMU's John Wise said, "A research group or pharmaceutical company need only actually synthesize the molecules with the best chances of being improved, leaving the thousands of unimproved molecules in the computer and not on the lab bench."
DALLAS (SMU) - SMU researchers have developed a set of computer-driven routines that can mimic chemical reactions in a lab, cutting the time and labor-related expense frequently required to find the best possible drug for a desired outcome.
The University has a patent pending for the computational routines under the name ChemGen. In addition to speeding the process of finding successful drugs for specific applications, ChemGen will allow smaller labs to contribute to meaningful research at a level many cannot currently afford.
"ChemGen has the ability to replace a team of 20 highly-skilled organic chemists in the optimization of a molecule of interest," said lead inventor John Wise , an SMU professor who specializes in structural biochemistry. "We're basically arming an army of smaller labs to do really sophisticated research.
"I would also hope that major drug companies take advantage of this technology, too," Wise said. ChemGen could potentially empower a building full of skilled chemists to dramatically increase their productivity from working on as few as six problems a year to as many as 60, he said.
"That's going to make new drugs come out faster and cheaper, which is exactly what we need for the coronavirus and whatever comes next," Wise said.
Currently, it can take 12 to 15 years for a new drug to work its way through the design, development, testing and approval process for use in patients. And while the mean cost of drug development to manufacturers is the subject of debate, estimates place that cost as high as $2.6 billion.
How it works
ChemGen speeds up an early part of the drug discovery process known as pharmacological optimization - making the drug functional and effective for specific applications - a task that can take months for a team of organic chemists to do. ChemGen can do the same tasks virtually in a few days using high-performance computers like SMU's mammoth ManeFrame II.
Wise explains that the first step in creating a drug is to identify a molecular target that the drug can act on - a target that plays a role in allowing a person to be infected by a virus, to feel symptoms of a disease or to suffer other harm to the body. Once that target is identified, the next step is to find as many chemical keys as possible that can potentially block the target's function and prevent the negative biological effects that cause illness and disease. Both the molecular targets and the chemical keys that act upon them tend to be extremely complex molecules, responsible for a number of tasks in the human body. "They are like people," Wise said. "They're all different."
"When a drug company finds a drug hit - a chemical 'key' that they think could be valuable - they might have a team of very skilled chemists work on that one targeted molecule. That's not the only molecule they'll work with, but they might spend three months of the next year making 1,000 variations of that one molecule," Wise said.
This is the traditional approach to pharmacological optimization - chemists trying to determine if there's a better match to the target protein than the one they just found. The reason that matters is that if a drug doesn't fit the protein perfectly, it won't bond tightly enough with that protein to be effective. Researchers also need to identify what other proteins in the human body might unintentionally be blocked by that same key, possibly causing side effects.
ChemGen creates molecular variants of the original chemical key computationally instead of in a physical chemistry laboratory. It mimics what would happen under various combinations of circumstances.
"We taught ChemGen the rules of chemistry for these reactions - what can be done and what can't be done," said Wise, associate professor in the SMU Department of Biological Sciences."We can take a thousand compounds, react them in the computer, and make 1,000 products from that. Then we can take that group of 1,000 and react them with a second group of 1,000 other molecules to create a million different, but related products. This generates an enormous quantity of chemical variance for a given molecule."
As a result, ChemGen can look at those variants and determine if any of them are a better match for the targeted protein than the original key.
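To make the combinatorial step concrete, the sketch below shows the general pattern in a few lines of Python: enumerate every product of two reagent lists, score each virtual molecule, and keep only the top-ranked candidates for synthesis. It is purely illustrative - the reagent names and the fit_score placeholder are invented for the example, and ChemGen's actual reaction rules and scoring are not described in the release.

```python
from itertools import product

# Hypothetical reagent lists; a real run would use thousands of compounds.
scaffolds = ["scaffold_A", "scaffold_B", "scaffold_C"]
side_groups = ["amine_1", "ether_2", "halide_3"]

def react(scaffold, side_group):
    """Placeholder for a rule-based in-silico reaction step."""
    return f"{scaffold}+{side_group}"

def fit_score(molecule):
    """Placeholder for a docking/fit score against the target protein;
    a deterministic stand-in so the example runs."""
    return sum(ord(c) for c in molecule) % 100

# Enumerate every pairwise product (3 x 3 here; 1,000 x 1,000 in the article).
library = [react(s, g) for s, g in product(scaffolds, side_groups)]

# Rank the virtual library and keep only the most promising candidates
# for actual synthesis on the lab bench.
ranked = sorted(library, key=fit_score, reverse=True)
print(ranked[:3])
```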
"The process is blind. There's no bias. It generates these variants, and then just says, 'How well do you fit, and it ranks that," Wise said. "So a research group or pharmaceutical company need only actually synthesize the molecules with the best chances of being improved, leaving the thousands of unimproved molecules in the computer and not on the lab bench.
"This approach is very efficient in both time and money," Wise said. "It limits waste and makes it more likely that the new drug will be better than what was originally discovered."
Several SMU scientists contributed to making ChemGen a reality
Wise has been working for more than a decade with other SMU scientists, including students, to develop what became ChemGen.
Wise got the idea to create ChemGen while he and Pia Vogel were trying to find compounds that can reverse chemotherapy failure in aggressive cancers. Vogel is a professor and director of SMU's Center for Drug Discovery, Design and Delivery.
Alex Lippert, an associate professor in chemistry, helped Wise program ChemGen to know what it could and couldn't do in a chemical reaction. Lippert and his PhD student Maha Aljowni also physically synthesized the drug compounds predicted by ChemGen and showed that it accurately predicted new molecules that could be active against multidrug-resistant cancer.
Robert Kalescky took the scripts Wise wrote and converted them to a different programming language, so that ChemGen works faster and can be used by anyone. Kalescky is SMU's HPC Applications Scientist, who assists the research community at SMU with their use of ManeFrame II.
Amila K. Nanayakkara, Mike Chen, Maisa Correa de Oliveira and Lauren Ammerman - all of whom were or are students in the Biological Sciences Ph.D. program at SMU - also helped test it. Ketetha Olengue also assisted in the early research when she was an undergraduate at SMU.
About SMU
SMU is the nationally ranked global research university in the dynamic city of Dallas. SMU's alumni, faculty and nearly 12,000 students in eight degree-granting schools demonstrate an entrepreneurial spirit as they lead change in their professions, communities and the world. |
|||
556 | Finally, 3D-Printed Graphene Aerogels for Water Treatment | BUFFALO, N.Y. - Graphene excels at removing contaminants from water, but it's not yet a commercially viable use of the wonder material.
That could be changing.
In a recent study, University at Buffalo engineers report a new process of 3D printing graphene aerogels that they say overcomes two key hurdles for water treatment - scalability and creating a version of the material that's stable enough for repeated use.
"The goal is to safely remove contaminants from water without releasing any problematic chemical residue," says study co-author Nirupam Aich, PhD, assistant professor of environmental engineering at the UB School of Engineering and Applied Sciences. "The aerogels we've created hold their structure when put in water treatment systems, and they can be applied in diverse water treatment applications."
The study - " 3D printed graphene-biopolymer aerogels for water contaminant removal: a proof of concept " - was published in the Emerging Investigator Series of the journal Environmental Science: Nano. Arvid Masud, PhD, a former student in Aich's lab, is the lead author; Chi Zhou, PhD, associate professor of industrial and systems engineering at UB, is a co-author. | University at Buffalo (UB) and University of Pittsburgh engineers have three-dimensionally (3D) -printed graphene aerogels for water treatment, after addressing scalability and stability issues. UB's Nirupam Aich said, "The aerogels we've created hold their structure when put in water treatment systems, and they can be applied in diverse water treatment applications." The researchers infused graphene-derived ink with bio-inspired polymers of polydopamine and bovine serum albumin protein; the augmented aerogels remove contaminants from the water, including heavy metals, organic dyes, and organic solvents. Aich said the aerogels can be printed in larger sizes, making them usable in large facilities like wastewater treatment plants; they also are reusable. | [] | [] | [] | scitechnews | None | None | None | None | University at Buffalo (UB) and University of Pittsburgh engineers have three-dimensionally (3D) -printed graphene aerogels for water treatment, after addressing scalability and stability issues. UB's Nirupam Aich said, "The aerogels we've created hold their structure when put in water treatment systems, and they can be applied in diverse water treatment applications." The researchers infused graphene-derived ink with bio-inspired polymers of polydopamine and bovine serum albumin protein; the augmented aerogels remove contaminants from the water, including heavy metals, organic dyes, and organic solvents. Aich said the aerogels can be printed in larger sizes, making them usable in large facilities like wastewater treatment plants; they also are reusable.
BUFFALO, N.Y. - Graphene excels at removing contaminants from water, but it's not yet a commercially viable use of the wonder material.
That could be changing.
In a recent study, University at Buffalo engineers report a new process of 3D printing graphene aerogels that they say overcomes two key hurdles for water treatment - scalability and creating a version of the material that's stable enough for repeated use.
"The goal is to safely remove contaminants from water without releasing any problematic chemical residue," says study co-author Nirupam Aich, PhD, assistant professor of environmental engineering at the UB School of Engineering and Applied Sciences. "The aerogels we've created hold their structure when put in water treatment systems, and they can be applied in diverse water treatment applications."
The study - " 3D printed graphene-biopolymer aerogels for water contaminant removal: a proof of concept " - was published in the Emerging Investigator Series of the journal Environmental Science: Nano. Arvid Masud, PhD, a former student in Aich's lab, is the lead author; Chi Zhou, PhD, associate professor of industrial and systems engineering at UB, is a co-author. |
|||
557 | AWS Reveals Method to Build More Accurate Quantum Computer | Amazon's cloud subsidiary AWS has released its first research paper detailing a new architecture for a future quantum computer, which, if realized, could set a new standard for error correction.
The cloud company published a new blueprint for a fault-tolerant quantum computer that, although still purely theoretical, describes a new way of controlling quantum bits (or qubits) to ensure that they carry out calculations as accurately as possible.
The paper is likely to grab the attention of many experts who are working to improve quantum error correction (QEC), a field that has grown in parallel with quantum computing and that seeks to resolve one of the key barriers standing in the way of realising useful, large-scale quantum computers.
Quantum systems, which are expected to generate breakthroughs in industries ranging from finance to drug discovery thanks to exponentially greater compute capabilities, are effectively still riddled with imperfections, or errors, that can spoil the results of calculations.
The building blocks of quantum computers, qubits, exist in a special quantum state: instead of representing either a one or a zero, like the bits found in classical devices, quantum bits can exist in both states at the same time. While this enables a quantum computer to carry out many calculations at once, qubits are also highly unstable, and at risk of collapsing from their quantum state as soon as they are exposed to the outside environment. Consequently, the calculations performed by qubits in quantum gates cannot always be relied upon -- and scientists are now exploring ways to discover when a qubit has made an error, and to correct the mistake.
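As a back-of-the-envelope illustration of why such devices quickly outgrow classical simulation, a single qubit's state is a two-component complex vector, and n qubits require 2^n amplitudes; the short numpy sketch below is generic and independent of the AWS paper.

```python
import numpy as np

# One qubit: amplitudes for |0> and |1>, normalised so the probabilities sum to 1.
alpha, beta = 3 / 5, 4j / 5
qubit = np.array([alpha, beta], dtype=complex)
print(np.abs(qubit) ** 2)   # measurement probabilities: [0.36, 0.64]

# n qubits need a state vector of length 2**n, so classical memory grows exponentially.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    print(f"{n} qubits -> {amplitudes:.2e} complex amplitudes "
          f"(~{amplitudes * 16 / 1e9:.2e} GB at 16 bytes each)")
```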
"The quantum algorithms that are known to be useful -- those that are likely to have an overwhelming advantage over classical algorithms -- may require millions or billions of quantum gates. Unfortunately, quantum gates, the building blocks of quantum algorithms, are prone to errors," said AWS Center for Quantum Computing research scientists Patricio Arrangoiz-Arriola and Earl Campbell in a blog post .
"These error rates have decreased over time, but are still many orders of magnitude larger than what is needed to run high-fidelity algorithms. To reduce error rates further, researchers need to supplement approaches that lower gate error rates at the physical level with other methods such as QEC."
There are different ways to carry out quantum error correction. The conventional approach, known as active QEC, uses many imperfect qubits (called 'physical qubits') to encode a single, more reliable qubit, repeatedly measuring the group so that errors can be detected and corrected as they occur. The controllable qubit created in this way is called a 'logical qubit'.
Active QEC, however, creates a large hardware overhead in that many physical qubits are required to encode every logical qubit, which makes it even harder to build a universal quantum computer comprising large-scale qubit circuits.
Another approach, passive QEC, focuses on engineering a physical computing system that has an inherent stability against errors. Although much of the work around passive QEC is still experimental, the method aims to create intrinsic fault-tolerance that could accelerate the construction of a quantum computer with a large number of qubits.
In the new blueprint, AWS's researchers combine both active and passive QEC to create a quantum computer that, in principle, could achieve higher levels of precision. The architecture presents a system based on 'cat states' -- a form of passive QEC where qubits are kept in a state of superposition within an oscillator, while pairs of photons are injected and extracted to ensure that the quantum state remains stable.
This design, according to the scientists, has been shown to reduce bit-flip error, which occurs when a qubit's state flips from one to zero or vice versa. But to further protect qubits from other types of error that might arise, the researchers propose coupling passive QEC with known active QEC techniques.
Repetition code, for example, is a well-established approach to detect and correct error in quantum devices, which Arrangoiz-Arriola and Campbell used together with cat states to improve fault tolerance in their theoretical quantum computer.
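The intuition behind repetition coding can be seen in its classical analogue: copy a bit three times and decode by majority vote, so two errors must coincide before the logical value is corrupted. The sketch below is that classical analogy only - the quantum version measures parities between qubits rather than reading them out directly - but it shows the logical error rate dropping below the physical one.

```python
import random

def noisy_copy(bit, p):
    """Flip the bit with probability p (a crude stand-in for a physical error)."""
    return bit ^ (random.random() < p)

def logical_error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        copies = [noisy_copy(0, p) for _ in range(3)]
        decoded = 1 if sum(copies) >= 2 else 0   # majority vote
        errors += (decoded != 0)
    return errors / trials

for p in (0.2, 0.1, 0.01):
    print(f"physical error {p:>5}: logical error ~ {logical_error_rate(p):.4f}")
```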
The results seem promising: the combination of cat states and repetition code produced an architecture in which just over 2,000 superconducting components used for stabilization could produce a hundred logical qubits capable of executing a thousand gates.
"This may fit in a single dilution refrigerator using current or near-term technology and would go far beyond what we can simulate on a classical computer," said Arrangoiz-Arriola and Campbell.
Before the theoretical architecture proposed by the researchers takes shape as a physical device, however, several challenges remain. For example, cat states have already been demonstrated in the lab in previous proof-of-concept experiments, but they are yet to be produced at a useful scale.
The paper nevertheless suggests that AWS is gearing up for quantum computing, as major tech players increasingly enter what appears to be a race for quantum.
IBM recently unveiled a roadmap that eyes a 1,121-qubit system for 2023, and is currently working on a 127-qubit processor. Google's 54-qubit Sycamore chip made headlines in 2019 for achieving quantum supremacy; and Microsoft recently made its cloud-based quantum ecosystem, Azure Quantum, available for public preview.
Amazon, for its part, launched an AWS-managed service called Amazon Braket, which allows scientists, researchers and developers to experiment with computers from quantum hardware providers , such as D-Wave, IonQ and Rigetti. However, the company is yet to build its own quantum computer. | Amazon cloud subsidiary Amazon Web Services (AWS) has revealed a new architecture for a fault-tolerant quantum computer that could ensure quantum bits (qubits) execute calculations with maximum accuracy. The architecture presents a system based on "cat states" - a form of passive quantum error correction (QEC) where qubits are superpositioned within an oscillator, while photon pairs are injected and extracted to guarantee the quantum state's stability. This design can reduce bit-flip error, but the AWS team proposed coupling passive QEC with known active QEC methods for safeguards against other types of error. The resulting architecture enables just over 2,000 superconducting components used for stabilization to produce 100 logical qubits capable of executing 1,000 gates. | [] | [] | [] | scitechnews | None | None | None | None | Amazon cloud subsidiary Amazon Web Services (AWS) has revealed a new architecture for a fault-tolerant quantum computer that could ensure quantum bits (qubits) execute calculations with maximum accuracy. The architecture presents a system based on "cat states" - a form of passive quantum error correction (QEC) where qubits are superpositioned within an oscillator, while photon pairs are injected and extracted to guarantee the quantum state's stability. This design can reduce bit-flip error, but the AWS team proposed coupling passive QEC with known active QEC methods for safeguards against other types of error. The resulting architecture enables just over 2,000 superconducting components used for stabilization to produce 100 logical qubits capable of executing 1,000 gates.
Amazon's cloud subsidiary AWS has released its first research paper detailing a new architecture for a future quantum computer, which, if realized, could set a new standard for error correction.
The cloud company published a new blueprint for a fault-tolerant quantum computer that, although still purely theoretical, describes a new way of controlling quantum bits (or qubits) to ensure that they carry out calculations as accurately as possible.
The paper is likely to grab the attention of many experts who are working to improve quantum error correction (QEC), a field that has grown in parallel with quantum computing and that seeks to resolve one of the key barriers standing in the way of realising useful, large-scale quantum computers.
Quantum systems, which are expected to generate breakthroughs in industries ranging from finance to drug discovery thanks to exponentially greater compute capabilities, are effectively still riddled with imperfections, or errors, that can spoil the results of calculations.
The building blocks of quantum computers, qubits, exist in a special quantum state: instead of representing either a one or a zero, like the bits found in classical devices, quantum bits can exist in both states at the same time. While this enables a quantum computer to carry out many calculations at once, qubits are also highly unstable, and at risk of collapsing from their quantum state as soon as they are exposed to the outside environment. Consequently, the calculations performed by qubits in quantum gates cannot always be relied upon -- and scientists are now exploring ways to discover when a qubit has made an error, and to correct the mistake.
"The quantum algorithms that are known to be useful -- those that are likely to have an overwhelming advantage over classical algorithms -- may require millions or billions of quantum gates. Unfortunately, quantum gates, the building blocks of quantum algorithms, are prone to errors," said AWS Center for Quantum Computing research scientists Patricio Arrangoiz-Arriola and Earl Campbell in a blog post .
"These error rates have decreased over time, but are still many orders of magnitude larger than what is needed to run high-fidelity algorithms. To reduce error rates further, researchers need to supplement approaches that lower gate error rates at the physical level with other methods such as QEC."
There are different ways to carry out quantum error correction. The conventional approach, known as active QEC, uses many imperfect qubits (called 'physical qubits') to encode a single, more reliable qubit, repeatedly measuring the group so that errors can be detected and corrected as they occur. The controllable qubit created in this way is called a 'logical qubit'.
Active QEC, however, creates a large hardware overhead in that many physical qubits are required to encode every logical qubit, which makes it even harder to build a universal quantum computer comprising large-scale qubit circuits.
Another approach, passive QEC, focuses on engineering a physical computing system that has an inherent stability against errors. Although much of the work around passive QEC is still experimental, the method aims to create intrinsic fault-tolerance that could accelerate the construction of a quantum computer with a large number of qubits.
In the new blueprint, AWS's researchers combine both active and passive QEC to create a quantum computer that, in principle, could achieve higher levels of precision. The architecture presents a system based on 'cat states' -- a form of passive QEC where qubits are kept in a state of superposition within an oscillator, while pairs of photons are injected and extracted to ensure that the quantum state remains stable.
This design, according to the scientists, has been shown to reduce bit-flip error, which occurs when a qubit's state flips from one to zero or vice versa. But to further protect qubits from other types of error that might arise, the researchers propose coupling passive QEC with known active QEC techniques.
Repetition code, for example, is a well-established approach to detect and correct error in quantum devices, which Arrangoiz-Arriola and Campbell used together with cat states to improve fault tolerance in their theoretical quantum computer.
The results seem promising: the combination of cat states and repetition code produced an architecture in which just over 2,000 superconducting components used for stabilization could produce a hundred logical qubits capable of executing a thousand gates.
"This may fit in a single dilution refrigerator using current or near-term technology and would go far beyond what we can simulate on a classical computer," said Arrangoiz-Arriola and Campbell.
Before the theoretical architecture proposed by the researchers takes shape as a physical device, however, several challenges remain. For example, cat states have already been demonstrated in the lab in previous proof-of-concept experiments, but they are yet to be produced at a useful scale.
The paper nevertheless suggests that AWS is gearing up for quantum computing, as major tech players increasingly enter what appears to be a race for quantum.
IBM recently unveiled a roadmap that eyes a 1,121-qubit system for 2023, and is currently working on a 127-qubit processor. Google's 54-qubit Sycamore chip made headlines in 2019 for achieving quantum supremacy; and Microsoft recently made its cloud-based quantum ecosystem, Azure Quantum, available for public preview.
Amazon, for its part, launched an AWS-managed service called Amazon Braket, which allows scientists, researchers and developers to experiment with computers from quantum hardware providers , such as D-Wave, IonQ and Rigetti. However, the company is yet to build its own quantum computer. |
|||
558 | Domino's Launches Pizza Delivery Robot Car | Pizza chain Domino's has launched a robot car delivery service to select customers in Houston's Woodland Heights neighborhood, with fully autonomous vehicles from robotics company Nuro transporting orders to opt-in customers. Customers can select robot delivery and receive texts with updates on the vehicle's location, and a numerical code for retrieving the order. Upon arrival, the customer enters the code on the bot's touchscreen, and the vehicle's doors open to permit the removal of ordered food. Domino's said Nuro's robot car was the first fully autonomous, human-free on-road delivery vehicle to be cleared for operation by the U.S. Department of Transportation. Domino's Dennis Maloney said, "This program will allow us to better understand how customers respond to the deliveries, how they interact with the robot, and how it affects store operations." | [] | [] | [] | scitechnews | None | None | None | None | Pizza chain Domino's has launched a robot car delivery service to select customers in Houston's Woodland Heights neighborhood, with fully autonomous vehicles from robotics company Nuro transporting orders to opt-in customers. Customers can select robot delivery and receive texts with updates on the vehicle's location, and a numerical code for retrieving the order. Upon arrival, the customer enters the code on the bot's touchscreen, and the vehicle's doors open to permit the removal of ordered food. Domino's said Nuro's robot car was the first fully autonomous, human-free on-road delivery vehicle to be cleared for operation by the U.S. Department of Transportation. Domino's Dennis Maloney said, "This program will allow us to better understand how customers respond to the deliveries, how they interact with the robot, and how it affects store operations."
|
||||
559 | UNH Researchers Develop Software to Monitor Ocean Soundscape Especially During Covid-19 | DURHAM, N.H. - An international development team, led by researchers at the University of New Hampshire, has created a user-friendly software program that can process sound data collected from the world's oceans in a more standardized format that will enhance research and collaboration and help understand the global sea soundscape dynamics, including the impact of COVID-19 when travel and economic slowdowns put a halt to human activities in the ocean.
"Soundscape analysis can be important in detecting and interpreting changes in ocean ecosystems," said Jennifer Miksis-Olds, research professor and director of UNH's Center for Acoustics Research and Education. "Sound is the dominant sensory mode for marine life and humans for sensing the underwater environment, so understanding how the background ocean sound levels are changing will provide insight into how sensory systems (both biological and electronic) are impacted."
The software program, called MANTA (Making Ambient Noise Trends Accessible), provides a publicly available tool to process ocean audio recordings in a consistent way to support cross-study comparisons over space and time. It generates four data products per day capturing different aspects of ocean sound. The program will be used as part of a plan by the International Quiet Ocean Experiment (IQOE) to look at the effects of COVID-19 when human activities in the ocean, like shipping, fishing and recreational crafts essentially stopped, creating a unique opportunity for a time-series study of the impacts of sound on ocean soundscapes and marine life. Researchers around the world will create a global data repository with MANTA products to document vital ocean soundscapes and effects on the distribution and behavior of vocalizing animals.
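The release does not spell out how those data products are computed, but the flavour of a daily sound-level summary can be sketched with standard tools: split a day of hydrophone audio into chunks, estimate each chunk's power spectral density, and keep summary statistics such as a daily minimum PSD (the kind of product referenced in the image caption at the end of this release). The sample rate, window lengths and synthetic signal below are illustrative only.

```python
import numpy as np
from scipy.signal import welch

fs = 8_000                                   # sample rate in Hz (illustrative)
rng = np.random.default_rng(0)
day_of_audio = rng.normal(size=fs * 60)      # a "day" shrunk to 60 s of synthetic noise

chunk_len = fs * 10                          # 10-second analysis windows
psds = []
for start in range(0, len(day_of_audio), chunk_len):
    chunk = day_of_audio[start:start + chunk_len]
    freqs, psd = welch(chunk, fs=fs, nperseg=1024)
    psds.append(psd)

psds = np.array(psds)
daily_min = psds.min(axis=0)                 # e.g. a "daily minimum PSD" product
daily_median = np.median(psds, axis=0)
print(freqs[:3], daily_min[:3], daily_median[:3])
```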
"Integrating data on animal behavior based on soundscapes can reveal long-term effects of changes in oceans," said Miksis-Olds. "The challenge with this research has always been that there aren't any formal standardized methods for collecting, processing and reporting ocean sound levels."
Marine animals use sound and natural sonar to navigate and communicate across the ocean. Combined with underwater recording tools, like hydrophones, and methods such as animal tagging, the software program will help reveal the extent to which noise in an anthropogenic sea impacts ocean species and their ecosystems. Sound travels far in the ocean and a hydrophone can pick up low-frequency audio signals from hundreds, even thousands of kilometers away.
Along with UNH, the international software development team included researchers from Cornell University, JASCO Applied Sciences, Loggerhead Instruments and Oregon State University.
The University of New Hampshire inspires innovation and transforms lives in our state, nation, and world. More than 16,000 students from all 50 states and 71 countries engage with an award-winning faculty in top-ranked programs in business, engineering, law, health and human services, liberal arts and the sciences across more than 200 programs of study. As one of the nation's highest-performing research universities, UNH partners with NASA, NOAA, NSF and NIH, and receives more than $110 million in competitive external funding every year to further explore and define the frontiers of land, sea and space.
PHOTO FOR DOWNLOAD Image: https://www.unh.edu/unhtoday/sites/default/files/media/80670gomex04_s06_rh404.1.197368_20191226_daily_min_psd.png Credit: UNH Caption: Image of one of the MANTA data products allowing standardized analysis of ocean sounds to better understand the global ocean soundscape. | An international research team led by University of New Hampshire (UNH) scientists has developed a software program that processes oceanic sound data in a more standardized format, to help understand global sea soundscape dynamics. The publicly available MANTA (Making Ambient Noise Trends Accessible) program can consistently process ocean audio recordings to facilitate comparisons over space and time. It produces four data products daily as part of the International Quiet Ocean Experiment to study the effects of Covid-19-related cessation of human oceanic activity. UNH's Jennifer Miksis-Olds said, "Sound is the dominant sensory mode for marine life and humans for sensing the underwater environment, so understanding how the background ocean sound levels are changing will provide insight into how sensory systems [both biological and electronic] are impacted." | [] | [] | [] | scitechnews | None | None | None | None | An international research team led by University of New Hampshire (UNH) scientists has developed a software program that processes oceanic sound data in a more standardized format, to help understand global sea soundscape dynamics. The publicly available MANTA (Making Ambient Noise Trends Accessible) program can consistently process ocean audio recordings to facilitate comparisons over space and time. It produces four data products daily as part of the International Quiet Ocean Experiment to study the effects of Covid-19-related cessation of human oceanic activity. UNH's Jennifer Miksis-Olds said, "Sound is the dominant sensory mode for marine life and humans for sensing the underwater environment, so understanding how the background ocean sound levels are changing will provide insight into how sensory systems [both biological and electronic] are impacted."
DURHAM, N.H. - An international development team, led by researchers at the University of New Hampshire, has created a user-friendly software program that can process sound data collected from the world's oceans in a more standardized format that will enhance research and collaboration and help understand the global sea soundscape dynamics, including the impact of COVID-19 when travel and economic slowdowns put a halt to human activities in the ocean.
"Soundscape analysis can be important in detecting and interpreting changes in ocean ecosystems," said Jennifer Miksis-Olds, research professor and director of UNH's Center for Acoustics Research and Education. "Sound is the dominant sensory mode for marine life and humans for sensing the underwater environment, so understanding how the background ocean sound levels are changing will provide insight into how sensory systems (both biological and electronic) are impacted."
The software program, called MANTA (Making Ambient Noise Trends Accessible), provides a publicly available tool to process ocean audio recordings in a consistent way to support cross-study comparisons over space and time. It generates four data products per day capturing different aspects of ocean sound. The program will be used as part of a plan by the International Quiet Ocean Experiment (IQOE) to look at the effects of COVID-19 when human activities in the ocean, like shipping, fishing and recreational crafts essentially stopped, creating a unique opportunity for a time-series study of the impacts of sound on ocean soundscapes and marine life. Researchers around the world will create a global data repository with MANTA products to document vital ocean soundscapes and effects on the distribution and behavior of vocalizing animals.
"Integrating data on animal behavior based on soundscapes can reveal long-term effects of changes in oceans," said Miksis-Olds. "The challenge with this research has always been that there aren't any formal standardized methods for collecting, processing and reporting ocean sound levels."
Marine animals use sound and natural sonar to navigate and communicate across the ocean. Combined with underwater recording tools, like hydrophones, and methods such as animal tagging, the software program will help reveal the extent to which noise in an anthropogenic sea impacts ocean species and their ecosystems. Sound travels far in the ocean and a hydrophone can pick up low-frequency audio signals from hundreds, even thousands of kilometers away.
Along with UNH, the international software development team included researchers from Cornell University, JASCO Applied Sciences, Loggerhead Instruments and Oregon State University.
The University of New Hampshire inspires innovation and transforms lives in our state, nation, and world. More than 16,000 students from all 50 states and 71 countries engage with an award-winning faculty in top-ranked programs in business, engineering, law, health and human services, liberal arts and the sciences across more than 200 programs of study. As one of the nation's highest-performing research universities, UNH partners with NASA, NOAA, NSF and NIH, and receives more than $110 million in competitive external funding every year to further explore and define the frontiers of land, sea and space.
PHOTO FOR DOWNLOAD Image: https://www.unh.edu/unhtoday/sites/default/files/media/80670gomex04_s06_rh404.1.197368_20191226_daily_min_psd.png Credit: UNH Caption: Image of one of the MANTA data products allowing standardized analysis of ocean sounds to better understand the global ocean soundscape. |
|||
560 | GIS Technology Helps Map Out How America's Mafia Networks Were 'Connected' | UNIVERSITY PARK, Pa. - At its height in the mid-20th century, American organized crime groups, often called the mafia, grossed approximately $40 billion each year, typically raising that money through illegal or untaxed activities, such as extortion and gambling.
A team of researchers used geographic information systems - a collection of tools for geographic mapping and analysis of the Earth and society - and data from a government database on mafia ties during the 1960s, to examine how these networks were built, maintained and grown. The researchers said that this spatial social networks study offers a unique look at the mafia's loosely affiliated criminal groups. Often called families, these groups were connected - internally and externally - to maintain a balance between security and effectiveness, referred to as the efficiency-security tradeoff.
"In this type of network, there are two competing prerogatives," said Daniel DellaPosta , assistant professor of sociology and social data analytics and affiliate of the Institute for Computational and Data Sciences . "The first is that you want your organization to be structured in a way that allows for effective communication between members, so that they can coordinate their behaviors to achieve goals for the group. However, the second prerogative is security. In a covert network like this, which is trying to evade detection from authorities, you might not want your network to be too well connected because then if one member gets captured, they could implicate a lot of others."
The team investigated this tradeoff by analyzing two specific metrics in the mafia's network connections, according to Clio Andris, assistant professor of city and regional planning and interactive computing at Georgia Tech.
"The efficiency security tradeoff helped us ground the hypothesis that the mafia was optimizing their organization for something," said Andris, who used the mafia data as a case study in graduate seminar in her former role as a Penn State faculty member. "We didn't find a value that we could say how optimized it was, but we were able to use two metrics together to measure optimization. One metric measured how much the network was clustered and the other measured the number of intermediary connections - or 'hops' - between people. We can make those hops into the distance between people."
As one example of that geographic concentration, the researchers found that at least 80% of members in each of New York City's five major crime families - Profaci, Gambino, Genovese, Lucchese and Bonanno - lived within about 18.5 miles of their family's median center in the city. High-ranking members tended to live near the center of those areas.
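A rough version of that calculation takes the member-wise median latitude and longitude as the family's center and measures great-circle distances from it; the coordinates below are made up, and the paper's exact median-center definition may differ.

```python
from math import radians, sin, cos, asin, sqrt
from statistics import median

def haversine_miles(p, q):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))

# Made-up member home locations scattered around New York City.
members = [(40.71, -74.00), (40.73, -73.99), (40.65, -73.95), (40.85, -73.90), (40.60, -74.05)]

center = (median(lat for lat, _ in members), median(lon for _, lon in members))
share = sum(haversine_miles(m, center) <= 18.5 for m in members) / len(members)
print(f"median center {center}, share within 18.5 miles: {share:.0%}")
```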
The researchers, who report their findings in a recent issue of the International Journal of Geographical Information Science, also examined the mafia's networks in both incorporated and nonincorporated cities. In incorporated cities, like New York City, the mafia families tended to operate in distinct neighborhoods, or "turf." However, in nonincorporated, or open, cities, such as Miami, many different families could operate on the same turf. All five of New York City's families sent representatives to Miami, lured there by the growing gambling rackets, the researchers suggested.
The data suggested that a mafia member's connections may have played some role in whether a member was dispatched to the Florida city or not. According to Andris, the people who were sent to Miami tended to be people with limited connections mixed in with a group of better connected - but not optimally connected - people.
According to DellaPosta, the team's findings could have implications not only for criminology in general, but also for law enforcement practices today.
"Even though our data is from 1960, many of the families in the data are still around in some fashion today," said DellaPosta. "There's reason to think that they organize in a fairly similar way. What we add to that narrative in this paper is the geographic and spatial dimension, as well, which might be important, especially in coordinating efforts across multiple law enforcement agencies."
The researchers used a database of 680 mafia members taken from a 1960 dossier compiled by the U.S. Federal Bureau of Narcotics. The data included connections between members of the mafia geolocated to a known home address in 15 major U.S. cities.
The study started as a lesson in Andris's graduate seminar in GIS while she was a faculty member at Penn State.
"This project was a cool opportunity to use a unique data source and a novel analysis strategy to think about how people use geographic space," said Brittany Freelin, graduate student sociology and criminology, who worked with Andris and DellaPosta on the project. "The mafia data allowed us to map mafia member's addresses and the spatial social networks approach enabled analyses of how those mafia members were distributed across U.S. cities, which allowed us to simultaneously examine the spatial and social network elements of the data."
The team also included Xi Zhu, a former postdoctoral scholar in the GeoVISTA Center; Bradley Hinger, doctoral student in geography; and Hanzhou Chen, a former graduate assistant in geography, all of Penn State. | Researchers at the Pennsylvania State University (Penn State) used geographic information systems and a government database on 1960s mafia ties to study how organized crime networks are built, maintained, and grown. Specifically, they examined the efficiency-security tradeoff in the organization of mafia families by analyzing two specific metrics in their network connections, one measuring how much the network was clustered and another measuring the number of intermediary connections between people. They found that at least 80% of members in each of the five major crime families in New York City lived within about 18.5 miles of their family's median center in the city. They also determined that mafia families in incorporated cities generally operated in distinct neighborhoods, while those in nonincorporated cities could operate on the same "turf." Penn State's Daniel DellaPosta said geographic and spatial dimension could be important in coordinating efforts across multiple law enforcement agencies. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the Pennsylvania State University (Penn State) used geographic information systems and a government database on 1960s mafia ties to study how organized crime networks are built, maintained, and grown. Specifically, they examined the efficiency-security tradeoff in the organization of mafia families by analyzing two specific metrics in their network connections, one measuring how much the network was clustered and another measuring the number of intermediary connections between people. They found that at least 80% of members in each of the five major crime families in New York City lived within about 18.5 miles of their family's median center in the city. They also determined that mafia families in incorporated cities generally operated in distinct neighborhoods, while those in nonincorporated cities could operate on the same "turf." Penn State's Daniel DellaPosta said geographic and spatial dimension could be important in coordinating efforts across multiple law enforcement agencies.
UNIVERSITY PARK, Pa. - At its height in the mid-20th century, American organized crime groups, often called the mafia, grossed approximately $40 billion each year, typically raising that money through illegal or untaxed activities, such as extortion and gambling.
A team of researchers used geographic information systems - a collection of tools for geographic mapping and analysis of the Earth and society - and data from a government database on mafia ties during the 1960s, to examine how these networks were built, maintained and grown. The researchers said that this spatial social networks study offers a unique look at the mafia's loosely affiliated criminal groups. Often called families, these groups were connected - internally and externally - to maintain a balance between security and effectiveness, referred to as the efficiency-security tradeoff.
"In this type of network, there are two competing prerogatives," said Daniel DellaPosta , assistant professor of sociology and social data analytics and affiliate of the Institute for Computational and Data Sciences . "The first is that you want your organization to be structured in a way that allows for effective communication between members, so that they can coordinate their behaviors to achieve goals for the group. However, the second prerogative is security. In a covert network like this, which is trying to evade detection from authorities, you might not want your network to be too well connected because then if one member gets captured, they could implicate a lot of others."
The team investigated this tradeoff by analyzing two specific metrics in the mafia's network connections, according to Clio Andris, assistant professor of city and regional planning and interactive computing at Georgia Tech.
"The efficiency security tradeoff helped us ground the hypothesis that the mafia was optimizing their organization for something," said Andris, who used the mafia data as a case study in graduate seminar in her former role as a Penn State faculty member. "We didn't find a value that we could say how optimized it was, but we were able to use two metrics together to measure optimization. One metric measured how much the network was clustered and the other measured the number of intermediary connections - or 'hops' - between people. We can make those hops into the distance between people."
As one example of that geographic concentration, the researchers found that at least 80% of members in each of New York City's five major crime families - Profaci, Gambino, Genovese, Lucchese and Bonanno - lived within about 18.5 miles of their family's median center in the city. High-ranking members tended to live near the center of those areas.
The researchers, who report their findings in a recent issue of the International Journal of Geographical Information Science, also examined the mafia's networks in both incorporated and nonincorporated cities. In incorporated cities, like New York City, the mafia families tended to operate in distinct neighborhoods, or "turf." However, in nonincorporated, or open, cities, such as Miami, many different families could operate on the same turf. All five of New York City's families sent representatives to Miami, lured there by the growing gambling rackets, the researchers suggested.
The data suggested that a mafia member's connections may have played some role in whether a member was dispatched to the Florida city or not. According to Andris, the people who were sent to Miami tended to be people with limited connections mixed in with a group of better connected - but not optimally connected - people.
According to DellaPosta, the team's findings could have implications not only for criminology in general, but also for law enforcement practices today.
"Even though our data is from 1960, many of the families in the data are still around in some fashion today," said DellaPosta. "There's reason to think that they organize in a fairly similar way. What we add to that narrative in this paper is the geographic and spatial dimension, as well, which might be important, especially in coordinating efforts across multiple law enforcement agencies."
The researchers used a database of 680 mafia members taken from a 1960 dossier compiled by the U.S. Federal Bureau of Narcotics. The data included connections between members of the mafia geolocated to a known home address in 15 major U.S. cities.
The study started as a lesson in Andris's graduate seminar in GIS while she was a faculty member at Penn State.
"This project was a cool opportunity to use a unique data source and a novel analysis strategy to think about how people use geographic space," said Brittany Freelin, graduate student sociology and criminology, who worked with Andris and DellaPosta on the project. "The mafia data allowed us to map mafia member's addresses and the spatial social networks approach enabled analyses of how those mafia members were distributed across U.S. cities, which allowed us to simultaneously examine the spatial and social network elements of the data."
The team also included Xi Zhu, a former postdoctoral scholar in the GeoVISTA Center; Bradley Hinger, doctoral student in geography; and Hanzhou Chen, a former graduate assistant in geography, all of Penn State. |
|||
561 | ACM Prize Awarded to Pioneer in Quantum Computing | New York, NY, April 14, 2021 - ACM, the Association for Computing Machinery, today announced that Scott Aaronson has been named the recipient of the 2020 ACM Prize in Computing for groundbreaking contributions to quantum computing. Aaronson is the David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin.
The goal of quantum computing is to harness the laws of quantum physics to build devices that can solve problems that classical computers either cannot solve, or not solve in any reasonable amount of time. Aaronson showed how results from computational complexity theory can provide new insights into the laws of quantum physics, and brought clarity to what quantum computers will, and will not, be able to do.
Aaronson helped develop the concept of quantum supremacy, which denotes the milestone that is achieved when a quantum device can solve a problem that no classical computer can solve in a reasonable amount of time. Aaronson established many of the theoretical foundations of quantum supremacy experiments. Such experiments allow scientists to give convincing evidence that quantum computers provide exponential speedups without having to first build a full fault-tolerant quantum computer.
The ACM Prize in Computing recognizes early-to-mid-career computer scientists whose research contributions have fundamental impact and broad implications. The award carries a prize of $250,000, from an endowment provided by Infosys Ltd.
"Few areas of technology have as much potential as quantum computation," said ACM President Gabriele Kotsis. "Despite being at a relatively early stage in his career, Scott Aaronson is esteemed by his colleagues for the breadth and depth of his contributions. He has helped guide the development of this new field, while clarifying its possibilities as a leading educator and superb communicator. Importantly, his contributions have not been confined to quantum computation, but have had significant impact in areas such as computational complexity theory and physics."
Notable Contributions
Boson Sampling: In the paper "The Computational Complexity of Linear Optics," Aaronson and co-author Alex Arkhipov gave evidence that rudimentary quantum computers built entirely out of linear-optical elements cannot be efficiently simulated by classical computers.
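The hardness in that argument traces back to the matrix permanent: the output amplitudes of a linear-optical network are given by permanents of submatrices of its unitary, and the best known exact algorithms for the permanent, such as Ryser's formula, take time exponential in the matrix size. The implementation below is a generic illustration, not code from the paper.

```python
from itertools import combinations

def permanent(matrix):
    """Ryser's formula: exact but exponential-time permanent of an n x n matrix."""
    n = len(matrix)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in matrix:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Permanent of [[1, 2], [3, 4]] is 1*4 + 2*3 = 10 (unlike the determinant, no minus signs).
print(permanent([[1, 2], [3, 4]]))
```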
Aaronson has since explored how quantum supremacy experiments could deliver a key application of quantum computing, namely the generation of cryptographically random bits.
Fundamental Limits of Quantum Computers: In his 2002 paper "Quantum lower bound for the collision problem," Aaronson proved the quantum lower bound for the collision problem, which was a major open problem for years. This work bounds the minimum time for a quantum computer to find collisions in many-to-one functions, giving evidence that a basic building block of cryptography will remain secure for quantum computers.
Classical Complexity Theory: Aaronson is well-known for his work on "algebrization", a technique he invented with Avi Wigderson to understand the limits of algebraic techniques for separating and collapsing complexity classes.
Making Quantum Computing Accessible: Beyond his technical contributions, Aaronson is credited with making quantum computing understandable to a wide audience. Through his many efforts, he has become recognized as a leading spokesperson for the field. He maintains a popular blog, Shtetl-Optimized, where he explains timely and exciting topics in quantum computing in a simple and effective way. His posts, which range from fundamental theory questions to debates about current quantum devices, are widely read and trigger many interesting discussions.
Aaronson also authored Quantum Computing Since Democritus, a respected book on quantum computing, wrote several articles for a popular science audience, and presented TED Talks to dispel misconceptions and provide the public with a more accurate overview of the field.
"Infosys is proud to fund the ACM Prize in Computing and we congratulate Scott Aaronson on being this year's recipient," said Pravin Rao, COO of Infosys. "When the effort to build quantum computation devices was first seriously explored in the 1990s, some labeled it as science fiction. While the realization of a fully functional quantum computer may still be in the future, this is certainly not science fiction. The successful quantum hardware experiments by Google and others have been a marvel to many who are following these developments. Scott Aaronson has been a leading figure in this area of research and his contributions will continue to focus and guide the field as it reaches its remarkable potential."
Biographical Background
Scott Aaronson is the David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary area of research is theoretical computer science, and his research interests center around the capabilities and limits of quantum computers, and computational complexity theory more generally.
A graduate of Cornell University, Aaronson earned a PhD in Computer Science from the University of California, Berkeley. His honors include the Tomassoni-Chisesi Prize in Physics (2018), a Simons Investigator Award (2017), and the Alan T. Waterman Award of the National Science Foundation (2012).
The ACM Prize in Computing recognizes an early- to mid-career fundamental innovative contribution in computing that, through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline. The award carries a prize of $250,000. Financial support is provided by an endowment from Infosys Ltd. The ACM Prize in Computing was previously known as the ACM-Infosys Foundation Award in the Computing Sciences from 2007 through 2015. ACM Prize recipients are invited to participate in the Heidelberg Laureate Forum, an annual networking event that brings together young researchers from around the world with recipients of the ACM A.M. Turing Award (computer science), the Abel Prize (mathematics), the Fields Medal (mathematics), and the Nevanlinna Prize (mathematics).
ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
Infosys is a global leader in next-generation digital services and consulting. We enable clients in 46 countries to navigate their digital transformation. With over three decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. | ACM has named Scott Aaronson the recipient of the 2020 ACM Prize in Computing for his pioneering contributions to quantum computing. Aaronson helped develop the concept of quantum supremacy, which is when a quantum device is able to solve a problem that classical computers cannot solve in a reasonable amount of time. The University of Texas at Austin professor established many theoretical precepts of quantum supremacy experiments, and has researched how such experiments could facilitate the generation of cryptographically random bits. ACM president Gabriele Kotsis said, "Importantly, his contributions have not been confined to quantum computation, but have had significant impact in areas such as computational complexity theory and physics." | [] | [] | [] | scitechnews | None | None | None | None | ACM has named Scott Aaronson the recipient of the 2020 ACM Prize in Computing for his pioneering contributions to quantum computing. Aaronson helped develop the concept of quantum supremacy, which is when a quantum device is able to solve a problem that classical computers cannot solve in a reasonable amount of time. The University of Texas at Austin professor established many theoretical precepts of quantum supremacy experiments, and has researched how such experiments could facilitate the generation of cryptographically random bits. ACM president Gabriele Kotsis said, "Importantly, his contributions have not been confined to quantum computation, but have had significant impact in areas such as computational complexity theory and physics."
New York, NY, April 14, 2021 - ACM, the Association for Computing Machinery, today announced that Scott Aaronson has been named the recipient of the 2020 ACM Prize in Computing for groundbreaking contributions to quantum computing. Aaronson is the David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin.
The goal of quantum computing is to harness the laws of quantum physics to build devices that can solve problems that classical computers either cannot solve at all, or cannot solve in any reasonable amount of time. Aaronson showed how results from computational complexity theory can provide new insights into the laws of quantum physics, and brought clarity to what quantum computers will, and will not, be able to do.
Aaronson helped develop the concept of quantum supremacy, which denotes the milestone that is achieved when a quantum device can solve a problem that no classical computer can solve in a reasonable amount of time. Aaronson established many of the theoretical foundations of quantum supremacy experiments. Such experiments allow scientists to give convincing evidence that quantum computers provide exponential speedups without having to first build a full fault-tolerant quantum computer.
The ACM Prize in Computing recognizes early-to-mid-career computer scientists whose research contributions have fundamental impact and broad implications. The award carries a prize of $250,000, from an endowment provided by Infosys Ltd.
"Few areas of technology have as much potential as quantum computation," said ACM President Gabriele Kotsis. "Despite being at a relatively early stage in his career, Scott Aaronson is esteemed by his colleagues for the breadth and depth of his contributions. He has helped guide the development of this new field, while clarifying its possibilities as a leading educator and superb communicator. Importantly, his contributions have not been confined to quantum computation, but have had significant impact in areas such as computational complexity theory and physics."
Notable Contributions
Boson Sampling: In the paper "The Computational Complexity of Linear Optics," Aaronson and co-author Alex Arkhipov gave evidence that rudimentary quantum computers built entirely out of linear-optical elements cannot be efficiently simulated by classical computers.
Aaronson has since explored how quantum supremacy experiments could deliver a key application of quantum computing, namely the generation of cryptographically random bits.
Fundamental Limits of Quantum Computers: In his 2002 paper "Quantum lower bound for the collision problem," Aaronson proved the quantum lower bound for the collision problem, which was a major open problem for years. This work bounds the minimum time for a quantum computer to find collisions in many-to-one functions, giving evidence that a basic building block of cryptography will remain secure for quantum computers.
Classical Complexity Theory: Aaronson is well-known for his work on "algebrization," a technique he invented with Avi Wigderson to understand the limits of algebraic techniques for separating and collapsing complexity classes.
Making Quantum Computing Accessible: Beyond his technical contributions, Aaronson is credited with making quantum computing understandable to a wide audience. Through his many efforts, he has become recognized as a leading spokesperson for the field. He maintains a popular blog, Shtetl-Optimized, where he explains timely and exciting topics in quantum computing in a simple and effective way. His posts, which range from fundamental theory questions to debates about current quantum devices, are widely read and trigger many interesting discussions.
Aaronson also authored Quantum Computing Since Democritus, a respected book on quantum computing, wrote several articles for a popular science audience, and presented TED Talks to dispel misconceptions and provide the public with a more accurate overview of the field.
"Infosys is proud to fund the ACM Prize in Computing and we congratulate Scott Aaronson on being this year's recipient," said Pravin Rao, COO of Infosys. "When the effort to build quantum computation devices was first seriously explored in the 1990s, some labeled it as science fiction. While the realization of a fully functional quantum computer may still be in the future, this is certainly not science fiction. The successful quantum hardware experiments by Google and others have been a marvel to many who are following these developments. Scott Aaronson has been a leading figure in this area of research and his contributions will continue to focus and guide the field as it reaches its remarkable potential."
Biographical Background
Scott Aaronson is the David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary area of research is theoretical computer science, and his research interests center around the capabilities and limits of quantum computers, and computational complexity theory more generally.
A graduate of Cornell University, Aaronson earned a PhD in Computer Science from the University of California, Berkeley. His honors include the Tomassoni-Chisesi Prize in Physics (2018), a Simons Investigator Award (2017), and the Alan T. Waterman Award of the National Science Foundation (2012).
The ACM Prize in Computing recognizes an early- to mid-career fundamental innovative contribution in computing that, through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline. The award carries a prize of $250,000. Financial support is provided by an endowment from Infosys Ltd. The ACM Prize in Computing was previously known as the ACM-Infosys Foundation Award in the Computing Sciences from 2007 through 2015. ACM Prize recipients are invited to participate in the Heidelberg Laureate Forum, an annual networking event that brings together young researchers from around the world with recipients of the ACM A.M. Turing Award (computer science), the Abel Prize (mathematics), the Fields Medal (mathematics), and the Nevanlinna Prize (mathematics).
ACM, the Association for Computing Machinery , is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
Infosys is a global leader in next-generation digital services and consulting. We enable clients in 46 countries to navigate their digital transformation. With over three decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem. |
|||
563 | FBI Launched Operation to Wipe Out Hacker Access to Hundreds of U.S. Servers | The U.S. Department of Justice (DOJ) said the Federal Bureau of Investigation (FBI) has launched a campaign to eliminate hacker access to hundreds of U.S.-based servers exposed by a bug in Microsoft Exchange software discovered earlier this year. The flaw gave hackers back doors into the servers of at least 30,000 U.S. organizations. Although the number of vulnerable servers has been reduced, attackers have already installed malware on thousands to open a separate route of infiltration; DOJ said hundreds of Web shells remained on certain U.S.-based computers running Microsoft Exchange by the end of March. The department's court filing said Microsoft identified the initial intruders as members of the China-sponsored HAFNIUM group. | [] | [] | [] | scitechnews | None | None | None | None | The U.S. Department of Justice (DOJ) said the Federal Bureau of Investigation (FBI) has launched a campaign to eliminate hacker access to hundreds of U.S.-based servers exposed by a bug in Microsoft Exchange software discovered earlier this year. The flaw gave hackers back doors into the servers of at least 30,000 U.S. organizations. Although the number of vulnerable servers has been reduced, attackers have already installed malware on thousands to open a separate route of infiltration; DOJ said hundreds of Web shells remained on certain U.S.-based computers running Microsoft Exchange by the end of March. The department's court filing said Microsoft identified the initial intruders as members of the China-sponsored HAFNIUM group.
|
||||
564 | 'Master,' 'Slave,' and the Fight Over Offensive Terms in Computing | While the fight over terminology reflects the intractability of racial issues in society, it is also indicative of a peculiar organizational culture that relies on informal consensus to get things done.
The Internet Engineering Task Force eschews voting, and it often measures consensus by asking opposing factions of engineers to hum during meetings. The hums are then assessed by volume and ferocity. Vigorous humming, even from only a few people, could indicate strong disagreement, a sign that consensus has not yet been reached.
The I.E.T.F. has created rigorous standards for the internet and for itself. Until 2016, it required the documents in which its standards are published to be precisely 72 characters wide and 58 lines long, a format adapted from the era when programmers punched their code into paper cards and fed them into early IBM computers.
"We have big fights with each other, but our intent is always to reach consensus," said Vint Cerf, one of the founders of the task force and a vice president at Google. "I think that the spirit of the I.E.T.F. still is that, if we're going to do anything, let's try to do it one way so that we can have a uniform expectation that things will function ."
The group is made up of about 7,000 volunteers from around the world. It has two full-time employees, an executive director and a spokesman, whose work is primarily funded by meeting dues and the registration fees of dot-org internet domains. It cannot force giants like Amazon or Apple to follow its guidance, but tech companies often choose to do so because the I.E.T.F. has created elegant solutions for engineering problems. | The Internet Engineering Task Force (IETF) is working to eliminate computer engineering terms that evoke racist history, including "master," "slave," "whitelist," and "blacklist." Some companies and technology organizations already have started changing some of these technical terms, raising concerns about consistency as the effort has stalled amid conversations about the history of slavery and racism in technology. IETF's Lars Eggert said he hopes guidance on terminology will be released later this year. In the meantime, GitHub is now using "main" instead of "master," and the programming community that maintains MySQL opted for "source" and "replica" to replace "master" and "slave." | [] | [] | [] | scitechnews | None | None | None | None | The Internet Engineering Task Force (IETF) is working to eliminate computer engineering terms that evoke racist history, including "master," "slave," "whitelist," and "blacklist." Some companies and technology organizations already have started changing some of these technical terms, raising concerns about consistency as the effort has stalled amid conversations about the history of slavery and racism in technology. IETF's Lars Eggert said he hopes guidance on terminology will be released later this year. In the meantime, GitHub is now using "main" instead of "master," and the programming community that maintains MySQL opted for "source" and "replica" to replace "master" and "slave."
While the fight over terminology reflects the intractability of racial issues in society, it is also indicative of a peculiar organizational culture that relies on informal consensus to get things done.
The Internet Engineering Task Force eschews voting, and it often measures consensus by asking opposing factions of engineers to hum during meetings. The hums are then assessed by volume and ferocity. Vigorous humming, even from only a few people, could indicate strong disagreement, a sign that consensus has not yet been reached.
The I.E.T.F. has created rigorous standards for the internet and for itself. Until 2016, it required the documents in which its standards are published to be precisely 72 characters wide and 58 lines long, a format adapted from the era when programmers punched their code into paper cards and fed them into early IBM computers.
"We have big fights with each other, but our intent is always to reach consensus," said Vint Cerf, one of the founders of the task force and a vice president at Google. "I think that the spirit of the I.E.T.F. still is that, if we're going to do anything, let's try to do it one way so that we can have a uniform expectation that things will function ."
The group is made up of about 7,000 volunteers from around the world. It has two full-time employees, an executive director and a spokesman, whose work is primarily funded by meeting dues and the registration fees of dot-org internet domains. It cannot force giants like Amazon or Apple to follow its guidance, but tech companies often choose to do so because the I.E.T.F. has created elegant solutions for engineering problems. |
|||
565 | CMU's Snakebot Goes for a Swim | Carnegie Mellon University's acclaimed snake-like robot can now slither its way underwater, allowing the modular robotics platform to inspect ships, submarines and infrastructure for damage.
A team from the Biorobotics Lab in the School of Computer Science's Robotics Institute tested the Hardened Underwater Modular Robot Snake (HUMRS) last month in the university's pool, diving the robot through underwater hoops, showing off its precise and smooth swimming, and demonstrating its ease of control.
"We can go places that other robots cannot," said Howie Choset , the Kavčić-Moura Professor of Computer Science. "It can snake around and squeeze into hard-to-reach underwater spaces."
The project is led by Choset and Matt Travers , co-directors of the Biorobotics Lab. The submersible robot snake was developed through a grant from the Advanced Robotics for Manufacturing (ARM) Institute . The project aims to assist the Department of Defense with inspecting ships, submarines and other underwater infrastructure for damage or as part of routine maintenance, said Matt Fischer, the program manager at the ARM Institute working on the project.
The military has limited options for inspecting areas like a ship's hull. To do so, the Navy must either send a team of divers to the ship's location, wait until it returns to port to deploy the divers, or pull it into a dry dock - all options that take time and money.
A submersible robot snake could allow the Navy to inspect the ship at sea, immediately alerting the crew to critical damage or sending information about issues that need attention back to port for use when the ship docks.
"If they can get that information before the ship comes into a home port or a dry dock, that saves weeks or months of time in a maintenance schedule," said Fischer, who served in the Navy for three years. "And in turn, that saves money."
Fischer, who crawled into the ballast tanks of a submarine during his service, said many sailors would gladly pass that difficult and tight duty to a robot.
Steve McKee, a co-lead of the Joint Robotics Organization for Building Organic Technologies (JROBOT), a Department of Defense task force interested in technology like the submersible robot snake, said the project is a great example of a partnership between CMU, the ARM Institute, and the Department of Defense that will improve the readiness of equipment in the armed services.
"The advancements being made hold great promise for helping not only the Department of Defense but also various industries around the world," McKee said. | A snake-like robot developed by Carnegie Mellon University (CMU) researchers can move smoothly and precisely while submerged in water. Developed with a grant from the Advanced Robotics for Manufacturing Institute, the Hardened Underwater Modular Robot Snake could be used to help the military inspect ships, submarines, and infrastructure for damage while at sea, saving time and money by not needing to wait until a ship enters its home port or dry dock. CMU's Nate Shoemaker-Trejo said the robot's main distinguishing features are its form factor and flexibility, explaining, "The robot snake is narrow and jointed. The end result is that an underwater robot snake can squeeze around corners and into small spaces where regular submersibles can't go." | [] | [] | [] | scitechnews | None | None | None | None | A snake-like robot developed by Carnegie Mellon University (CMU) researchers can move smoothly and precisely while submerged in water. Developed with a grant from the Advanced Robotics for Manufacturing Institute, the Hardened Underwater Modular Robot Snake could be used to help the military inspect ships, submarines, and infrastructure for damage while at sea, saving time and money by not needing to wait until a ship enters its home port or dry dock. CMU's Nate Shoemaker-Trejo said the robot's main distinguishing features are its form factor and flexibility, explaining, "The robot snake is narrow and jointed. The end result is that an underwater robot snake can squeeze around corners and into small spaces where regular submersibles can't go."
Carnegie Mellon University's acclaimed snake-like robot can now slither its way underwater, allowing the modular robotics platform to inspect ships, submarines and infrastructure for damage.
A team from the Biorobotics Lab in the School of Computer Science's Robotics Institute tested the Hardened Underwater Modular Robot Snake (HUMRS) last month in the university's pool, diving the robot through underwater hoops, showing off its precise and smooth swimming, and demonstrating its ease of control.
"We can go places that other robots cannot," said Howie Choset , the Kavčić-Moura Professor of Computer Science. "It can snake around and squeeze into hard-to-reach underwater spaces."
The project is led by Choset and Matt Travers , co-directors of the Biorobotics Lab. The submersible robot snake was developed through a grant from the Advanced Robotics for Manufacturing (ARM) Institute . The project aims to assist the Department of Defense with inspecting ships, submarines and other underwater infrastructure for damage or as part of routine maintenance, said Matt Fischer, the program manager at the ARM Institute working on the project.
The military has limited options for inspecting areas like a ship's hull. To do so, the Navy must either send a team of divers to the ship's location, wait until it returns to port to deploy the divers, or pull it into a dry dock - all options that take time and money.
A submersible robot snake could allow the Navy to inspect the ship at sea, immediately alerting the crew to critical damage or sending information about issues that need attention back to port for use when the ship docks.
"If they can get that information before the ship comes into a home port or a dry dock, that saves weeks or months of time in a maintenance schedule," said Fischer, who served in the Navy for three years. "And in turn, that saves money."
Fischer, who crawled into the ballast tanks of a submarine during his service, said many sailors would gladly pass that difficult and tight duty to a robot.
Steve McKee, a co-lead of the Joint Robotics Organization for Building Organic Technologies (JROBOT), a Department of Defense task force interested in technology like the submersible robot snake, said the project is a great example of a partnership between CMU, the ARM Institute, and the Department of Defense that will improve the readiness of equipment in the armed services.
"The advancements being made hold great promise for helping not only the Department of Defense but also various industries around the world," McKee said. |
|||
566 | Ford Retools Headquarters for Hybrid Work | Known as the Glass House, its main 12-story building has sat mostly empty since mid-March of last year, when most of the company's roughly 30,000 employees who work in or near the campus - from sales, marketing and human resources staff, to designers and engineers - shifted to remote work to guard against the spread of Covid-19.
The company's U.S. auto plants shut down for about two months last year, before reopening in May with added safety measures.
"Since then we've been pivoting to meet the needs of a hybrid workforce that we're trying to create here," in which staff at home work with colleagues in the office and around the world on a permanent basis, said Maru Flores, who leads Ford's global collaboration and client productivity services team and reports to the office of the chief information officer. Jeff Lemmer, Ford's former CIO, retired in January. A replacement has yet to be named.
Ms. Flores said that process began last summer when small groups of employees went back to their workstations on a staggered schedule to gather laptops, keyboards, monitors, ergonomic chairs, family photos and anything else they needed to work from home for an extended period.
The move was aimed at ensuring workers at home had access to the same workstation amenities as they did in the office, a critical but often overlooked component of hybrid workplace models, Ms. Flores said: "We equipped them to work from home, both comfortably and from a tech perspective."
Chris Howard, distinguished research vice president at information-technology research and consulting firm Gartner Inc., said a key challenge for IT leaders in outfitting a hybrid workplace is "ensuring parity among remote and co-located workers."
He said companies need to find ways to leverage digital tools with a goal of maintaining the quality and engagement of workers that remain remote, versus those who choose to work from the office.
Once remote-work capabilities were deemed to be in good shape, Ms. Flores turned her attention to the physical workplace. Her team developed a check-in app, requiring employees to fill out a health questionnaire to access the Dearborn campus.
The campus over the past four years has been undergoing a massive redesign, which has included adding more open collaborative spaces to a traditional mix of offices and cubicles.
Ms. Flores said ambient sensors have been installed throughout the main building, connected by Internet-of-Things software, to alert floor managers when too many people are gathered in any one space, such as enclosed offices. Her team also is testing a new system to enable workers to reserve predefined spaces in common areas for informal meetings, Ms. Flores said.
To increase the availability of virtual meeting rooms, the team has put together mobile videoconferencing carts, which can be rolled into any vacant space. The carts are fitted out with the same high-end cameras and larger, high-resolution screens typical of a conference or boardroom.
"The key thing about a hybrid workplace model is making sure workers in the office and at home feel equally connected to the workplace," Ms. Flores said. In addition to continually upgrading document sharing, messaging and videoconferencing tools - and an online "whiteboard on steroids," as Ms. Flores calls it - her team has created informal online spaces where workers can have spontaneous gatherings.
Using data analytics and artificial intelligence, the team is set to launch software that can anticipate workers' needs, such as access to documents or conference-room reservations, by analyzing shared workflows and schedules. It will also automatically generate alerts if an employee - at home or in the office - is overworked or due for time off, Ms. Flores said.
For added network security, the company is implementing information rights protection, which enables users to encrypt their data and create access permissions, as well as two-factor authentication for some data-sensitive apps.
The timeline for transitioning to a hybrid workplace model became murkier in recent weeks, as Michigan grappled with the nation's most severe resurgence in Covid-19 cases.
Like many companies, Ford had initially targeted a July 2020 return for office workers, when the spread of the virus began to decline, but then pushed the date back after reported cases climbed back up.
The state of the virus is still very fluid, Ms. Flores said, adding that workers' health and safety is a priority.
"At this point we don't know what the future entails," Ms. Flores said.
Write to Angus Loten at [email protected] | Automaker Ford is redesigning its Dearborn, MI, corporate headquarters to accommodate a hybrid workforce. Ford's Maru Flores said her team has devised a check-in application requiring workers to fill out a health questionnaire to access the Dearborn campus; the main building also has ambient sensors connected by Internet of Things software to notify floor managers when too many people are in any one space. Flores' team also is testing a system to let employees reserve predefined spaces in common areas for informal meetings, while mobile videoconferencing carts will support the availability of virtual meeting rooms. Said Flores, "The key thing about a hybrid workplace model is making sure workers in the office and at home feel equally connected to the workplace." | [] | [] | [] | scitechnews | None | None | None | None | Automaker Ford is redesigning its Dearborn, MI, corporate headquarters to accommodate a hybrid workforce. Ford's Maru Flores said her team has devised a check-in application requiring workers to fill out a health questionnaire to access the Dearborn campus; the main building also has ambient sensors connected by Internet of Things software to notify floor managers when too many people are in any one space. Flores' team also is testing a system to let employees reserve predefined spaces in common areas for informal meetings, while mobile videoconferencing carts will support the availability of virtual meeting rooms. Said Flores, "The key thing about a hybrid workplace model is making sure workers in the office and at home feel equally connected to the workplace."
Known as the Glass House, its main 12-story building has sat mostly empty since mid-March of last year, when most of the company's roughly 30,000 employees who work in or near the campus - from sales, marketing and human resources staff, to designers and engineers - shifted to remote work to guard against the spread of Covid-19.
The company's U.S. auto plants shut down for about two months last year, before reopening in May with added safety measures.
"Since then we've been pivoting to meet the needs of a hybrid workforce that we're trying to create here," in which staff at home work with colleagues in the office and around the world on a permanent basis, said Maru Flores, who leads Ford's global collaboration and client productivity services team and reports to the office of the chief information officer. Jeff Lemmer, Ford's former CIO, retired in January. A replacement has yet to be named.
Ms. Flores said that process began last summer when small groups of employees went back to their workstations on a staggered schedule to gather laptops, keyboards, monitors, ergonomic chairs, family photos and anything else they needed to work from home for an extended period.
The move was aimed at ensuring workers at home had access to the same workstation amenities as they did in the office, a critical but often overlooked component of hybrid workplace models, Ms. Flores said: "We equipped them to work from home, both comfortably and from a tech perspective."
Chris Howard, distinguished research vice president at information-technology research and consulting firm Gartner Inc., said a key challenge for IT leaders in outfitting a hybrid workplace is "ensuring parity among remote and co-located workers."
He said companies need to find ways to leverage digital tools with a goal of maintaining the quality and engagement of workers that remain remote, versus those who choose to work from the office.
Once remote-work capabilities were deemed to be in good shape, Ms. Flores turned her attention to the physical workplace. Her team developed a check-in app, requiring employees to fill out a health questionnaire to access the Dearborn campus.
The campus over the past four years has been undergoing a massive redesign, which has included adding more open collaborative spaces to a traditional mix of offices and cubicles.
Ms. Flores said ambient sensors have been installed throughout the main building, connected by Internet-of-Things software, to alert floor managers when too many people are gathered in any one space, such as enclosed offices. Her team also is testing a new system to enable workers to reserve predefined spaces in common areas for informal meetings, Ms. Flores said.
To increase the availability of virtual meeting rooms, the team has put together mobile videoconferencing carts, which can be rolled into any vacant space. The carts are fitted out with the same high-end cameras and larger, high-resolution screens typical of a conference or boardroom.
"The key thing about a hybrid workplace model is making sure workers in the office and at home feel equally connected to the workplace," Ms. Flores said. In addition to continually upgrading document sharing, messaging and videoconferencing tools - and an online "whiteboard on steroids," as Ms. Flores calls it - her team has created informal online spaces where workers can have spontaneous gatherings.
Using data analytics and artificial intelligence, the team is set to launch software that can anticipate workers' needs, such as access to documents or conference-room reservations, by analyzing shared workflows and schedules. It will also automatically generate alerts if an employee - at home or in the office - is overworked or due for time off, Ms. Flores said.
For added network security, the company is implementing information rights protection, which enables users to encrypt their data and create access permissions, as well as two-factor authentication for some data-sensitive apps.
The timeline for transitioning to a hybrid workplace model became murkier in recent weeks, as Michigan grappled with the nation's most severe resurgence in Covid-19 cases.
Like many companies, Ford had initially targeted a July 2020 return for office workers, when the spread of the virus began to decline, but then pushed the date back after reported cases climbed back up.
The state of the virus is still very fluid, Ms. Flores said, adding that workers' health and safety is a priority.
"At this point we don't know what the future entails," Ms. Flores said.
Write to Angus Loten at [email protected] |
|||
568 | IBM Releases Qiskit Modules That Use Quantum Computers to Improve ML | IBM has released the Qiskit Machine Learning suite of application modules as part of its effort to encourage developers to experiment with quantum computers. The company's Qiskit Applications Team said the modules promise to help optimize machine learning (ML) by tapping quantum systems for certain process components. The team said, "Quantum machine learning (QML) proposes new types of models that leverage quantum computers' unique capabilities to, for example, work in exponentially higher-dimensional feature spaces to improve the accuracy of models." IBM expects quantum computers to gain market momentum by performing specific tasks that are offloaded from classic computers to a quantum platform. | [] | [] | [] | scitechnews | None | None | None | None | IBM has released the Qiskit Machine Learning suite of application modules as part of its effort to encourage developers to experiment with quantum computers. The company's Qiskit Applications Team said the modules promise to help optimize machine learning (ML) by tapping quantum systems for certain process components. The team said, "Quantum machine learning (QML) proposes new types of models that leverage quantum computers' unique capabilities to, for example, work in exponentially higher-dimensional feature spaces to improve the accuracy of models." IBM expects quantum computers to gain market momentum by performing specific tasks that are offloaded from classic computers to a quantum platform.
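To make the announcement concrete, the following is a minimal sketch of the kind of workflow the new modules support: a quantum kernel feeding a classical support-vector classifier. It assumes the qiskit and qiskit-machine-learning packages roughly as released in 2021 (QSVC, QuantumKernel, ZZFeatureMap, QuantumInstance); the toy data is invented for illustration, and import paths or class names may differ in later releases.

```python
# Minimal sketch of a quantum-kernel classifier built on the newly released
# qiskit-machine-learning module. Assumes qiskit (terra ~0.17) and
# qiskit-machine-learning (~0.1); names may have changed in later versions.
import numpy as np
from qiskit import BasicAer
from qiskit.utils import QuantumInstance
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import QuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# Toy two-feature dataset (hypothetical values, purely for illustration).
X_train = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
y_train = np.array([0, 0, 1, 1])
X_test = np.array([[0.15, 0.15], [0.85, 0.85]])
y_test = np.array([0, 1])

# Map classical features into a quantum state (the higher-dimensional feature
# space mentioned in the announcement), evaluate the kernel on a simulator,
# and hand it to a classical support-vector classifier.
feature_map = ZZFeatureMap(feature_dimension=2, reps=2)
quantum_instance = QuantumInstance(BasicAer.get_backend("statevector_simulator"))
kernel = QuantumKernel(feature_map=feature_map, quantum_instance=quantum_instance)

classifier = QSVC(quantum_kernel=kernel)
classifier.fit(X_train, y_train)
print("Test accuracy:", classifier.score(X_test, y_test))
```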
|
||||
569 | Amazon Expanding 'Upskill' Training for Software Developer Roles to Workers Outside Company | Amazon is expanding its Amazon Technical Academy to train people outside the company for jobs in software engineering. The company's training partners, Kenzie Academy, which offers software engineering and UX design programs, and Lambda School, will adopt the Amazon Technical Academy curriculum; they also plan to recruit a diverse student body. Both will offer full-time, fully remote courses, with the Lambda School's Enterprise Backend Development Program lasting nine months and Kenzie Academy's Software Engineering Program lasting nine to 12 months. Amazon's Ashley Rajagopal wrote in a blog post, "We have intentionally evolved our curriculum and teaching approach to be accessible to participants who didn't have the opportunity, either because of background or financial limitations, to pursue a college degree in software engineering." | [] | [] | [] | scitechnews | None | None | None | None | Amazon is expanding its Amazon Technical Academy to train people outside the company for jobs in software engineering. The company's training partners, Kenzie Academy, which offers software engineering and UX design programs, and Lambda School, will adopt the Amazon Technical Academy curriculum; they also plan to recruit a diverse student body. Both will offer full-time, fully remote courses, with the Lambda School's Enterprise Backend Development Program lasting nine months and Kenzie Academy's Software Engineering Program lasting nine to 12 months. Amazon's Ashley Rajagopal wrote in a blog post, "We have intentionally evolved our curriculum and teaching approach to be accessible to participants who didn't have the opportunity, either because of background or financial limitations, to pursue a college degree in software engineering."
|
||||
570 | Researcher Uses Bat-Inspired Design to Develop Approach to Sound Location | Inspired by the workings of a bat's ear, Rolf Mueller, a professor of mechanical engineering at Virginia Tech, has created bio-inspired technology that determines the location of a sound's origin.
Mueller's development works from a simpler and more accurate model of sound location than previous approaches, which have traditionally been modeled after the human ear. His work marks the first new insight for determining sound location in 50 years.
The findings were published in Nature Machine Intelligence by Mueller and a former Ph.D. student, lead author Xiaoyan Yin.
"I have long admired bats for their uncanny ability to navigate complex natural environments based on ultrasound and suspected that the unusual mobility of the animal's ears might have something to do with this," said Mueller.
A new model for sound location
Bats navigate as they fly by using echolocation, determining how close an object is by continuously emitting sounds and listening to the echoes. Ultrasonic calls are emitted from the bat's mouth or nose, bouncing off the elements of its environment and returning as an echo. They also gain information from ambient sounds. Comparing the sounds that reach the ear to work out where they come from is known as sound localization.
Sound localization works differently in human ears. A 1907 discovery showed that humans can find location by virtue of having two ears, receivers that relay sound data to the brain for processing. Operating on two or more receivers makes it possible to tell the direction of sounds that contain only one frequency, an ability familiar to anyone who has heard the sound of a car horn as it passes. The horn is one frequency, and the ears work together with the brain to build a map of where the car is going.
A 1967 discovery then showed that when the number of receivers is reduced down to one, a single human ear can find the location of sounds if different frequencies are encountered. In the case of the passing car, this might be the car horn paired with the roaring of the car's engine.
According to Mueller, the workings of the human ear have inspired past approaches to pinpointing sound location, which have used pressure receivers, such as microphones, paired with the ability to either collect multiple frequencies or use multiple receivers. Building on a career of research with bats, Mueller knew that their ears were much more versatile sound receivers than the human ear. This prompted his team to pursue the objective of a single frequency and a single receiver instead of multiple receivers or frequencies.
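As a point of comparison with the conventional approaches described above, the short Python sketch below shows the textbook two-receiver calculation: the direction of a distant source is recovered from the difference in arrival time at two spaced microphones. It is a generic illustration only (the receiver spacing is an assumed value), not the single-receiver, fluttering-ear method developed by Mueller's team.

```python
# Textbook two-receiver direction finding from the arrival-time difference,
# the classical multi-receiver approach contrasted with the single-receiver
# design described above. Generic illustration; the spacing is an assumed value.
import numpy as np

SPEED_OF_SOUND = 343.0   # metres per second in air (approximate)
RECEIVER_SPACING = 0.20  # metres between the two receivers (assumed)

def bearing_from_delay(delta_t):
    """Far-field bearing in degrees off the forward axis, from the time difference."""
    ratio = np.clip(SPEED_OF_SOUND * delta_t / RECEIVER_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))

# A sound that reaches one receiver 0.25 milliseconds before the other
print(bearing_from_delay(0.25e-3))  # roughly 25 degrees off-axis
```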
Creating the ear
As they worked from the one-receiver, one-frequency model, Mueller's team sought to replicate a bat's ability to move its ears.
They created a soft synthetic ear inspired by horseshoe and Old-World leaf-nosed bats and attached it to a string and a simple motor, timed to make the ear flutter at the same time it received an incoming sound. These particular bats have ears that enable a complex transformation of sound waves, so nature's ready-made design was a logical choice. That transformation starts with the shape of the outer ear, called the pinna, which uses the movement of the ear as it receives sounds to create multiple shapes for reception which channel the sounds into the ear canal. | Inspired by bats' ears, Virginia Polytechnic Institute and State University (Virginia Tech) 's Rolf Mueller and colleagues have developed a new technique for identifying the point of origin of a sound. The Virginia Tech researchers created a soft synthetic ear inspired by horseshoe and Old-World leaf-nosed bats, and affixed it to a string and a motor timed to make the ear flutter when receiving an incoming sound. A deep neural network was trained to interpret the incoming signals and provide the source direction associated with each received audio input. Mueller said, "Our hope is to bring reliable and capable autonomy to complex outdoor environments, including precision agriculture and forestry; environmental surveillance, such as biodiversity monitoring; as well as defense and security-related applications." | [] | [] | [] | scitechnews | None | None | None | None | Inspired by bats' ears, Virginia Polytechnic Institute and State University (Virginia Tech) 's Rolf Mueller and colleagues have developed a new technique for identifying the point of origin of a sound. The Virginia Tech researchers created a soft synthetic ear inspired by horseshoe and Old-World leaf-nosed bats, and affixed it to a string and a motor timed to make the ear flutter when receiving an incoming sound. A deep neural network was trained to interpret the incoming signals and provide the source direction associated with each received audio input. Mueller said, "Our hope is to bring reliable and capable autonomy to complex outdoor environments, including precision agriculture and forestry; environmental surveillance, such as biodiversity monitoring; as well as defense and security-related applications."
Inspired by the workings of a bat's ear, Rolf Mueller, a professor of mechanical engineering at Virginia Tech, has created bio-inspired technology that determines the location of a sound's origin.
Mueller's development works from a simpler and more accurate model of sound location than previous approaches, which have traditionally been modeled after the human ear. His work marks the first new insight for determining sound location in 50 years.
The findings were published in Nature Machine Intelligence by Mueller and a former Ph.D. student, lead author Xiaoyan Yin.
"I have long admired bats for their uncanny ability to navigate complex natural environments based on ultrasound and suspected that the unusual mobility of the animal's ears might have something to do with this," said Mueller.
A new model for sound location
Bats navigate as they fly by using echolocation, determining how close an object is by continuously emitting sounds and listening to the echoes. Ultrasonic calls are emitted from the bat's mouth or nose, bouncing off the elements of its environment and returning as an echo. They also gain information from ambient sounds. Comparing the sounds that reach the ear to work out where they come from is known as sound localization.
Sound localization works differently in human ears. A 1907 discovery showed that humans can find location by virtue of having two ears, receivers that relay sound data to the brain for processing. Operating on two or more receivers makes it possible to tell the direction of sounds that contain only one frequency, an ability familiar to anyone who has heard the sound of a car horn as it passes. The horn is one frequency, and the ears work together with the brain to build a map of where the car is going.
A 1967 discovery then showed that when the number of receivers is reduced down to one, a single human ear can find the location of sounds if different frequencies are encountered. In the case of the passing car, this might be the car horn paired with the roaring of the car's engine.
According to Mueller, the workings of the human ear have inspired past approaches to pinpointing sound location, which have used pressure receivers, such as microphones, paired with the ability to either collect multiple frequencies or use multiple receivers. Building on a career of research with bats, Mueller knew that their ears were much more versatile sound receivers than the human ear. This prompted his team to pursue the objective of a single frequency and a single receiver instead of multiple receivers or frequencies.
Creating the ear
As they worked from the one-receiver, one-frequency model, Mueller's team sought to replicate a bat's ability to move its ears.
They created a soft synthetic ear inspired by horseshoe and Old-World leaf-nosed bats and attached it to a string and a simple motor, timed to make the ear flutter at the same time it received an incoming sound. These particular bats have ears that enable a complex transformation of sound waves, so nature's ready-made design was a logical choice. That transformation starts with the shape of the outer ear, called the pinna, which uses the movement of the ear as it receives sounds to create multiple shapes for reception which channel the sounds into the ear canal. |
|||
571 | International Research Collaboration Solves Centuries-Old Puzzle of Pattern Formation in Flower Heads | If you're walking in a field of flowers this summer, look closely at the beautiful patterns in the flower heads. Figuring out how these distinctive and ubiquitous patterns form has puzzled scientists for centuries.
Now, an international team including University of Calgary researchers has solved the problem that stumped so many, including famed British mathematician, computer scientist and theoretical biologist Alan Turing.
The team's five-year study focused on "phyllotaxis," the distribution of organs such as leaves and flowers on their supporting structure, which is a key attribute of plant architecture.
The formation of spiral phyllotactic patterns has been an open fundamental problem in developmental plant biology for centuries, due to the patterns' role in defining plant form, says study co-author Dr. Przemyslaw Prusinkiewicz, PhD, professor in the Department of Computer Science in the Faculty of Science.
"We have cracked this problem," he says.
The collaboration involved Prusinkiewicz's UCalgary research group and, from the University of Helsinki in Finland, a group led by Prof. Paula Elomaa.
The team combined tools unavailable to Turing - diverse genetic, microscopy and computational modelling techniques - to explain how phyllotactic patterns of flowers emerge in the flower heads of Gerbera hybrida , a member of the daisy family which also includes sunflower.
They found that phyllotactic patterns in gerbera, whose heads have large numbers (in the order of 1,000) of individual flowers, develop in a different way than patterns in plants with small numbers of organs. Scientists had previously explained those patterns in experimental model plants, such as Arabidopsis and tomato.
But the team's discovery of a novel developmental mechanism brings several new elements to the "traditional" theory of phyllotaxis.
"The patterning is not occurring in a static, pre-formed head structure. It occurs concurrently with the growth of the structure - when the flower head develops - and this plays a major role," Prusinkiewicz says.
The team's study, "Phyllotactic Patterning of Gerbera Flower Heads," is published as an open access paper in the Proceedings of the National Academy of Sciences of the USA.
Prusinkiewicz and his research group, which included study co-authors Dr. Mikolaj Cieslak, PhD, a senior research associate, and PhD student Andrew Owens, developed mathematical models based on experimental data obtained by the group at the University of Helsinki.
"This was exemplary interdisciplinary research and international collaboration. Neither group could have obtained these results working alone," Prusinkiewicz notes.
The team found that phyllotactic patterns in gerbera are initiated by molecular processes, controlled by a plant hormone called auxin, taking place at the rim of the flower head. These processes occur long before any morphological (structural) changes can be seen.
As the flower head grows, new flowers are added between those previously formed as space becomes available, but they are shifted asymmetrically toward their older neighbouring flowers - producing a zigzag template for new flowers.
The team is the first to observe and report this asymmetry, and to show that it is key to the emergence of the phyllotactic patterns' mathematical properties. For instance, the flowers are arranged in left- and right-winding spirals whose counts are typically "Fibonacci numbers," a sequence in which each number is the sum of the two preceding ones: 1, 2, 3, 5, 8, 13, 21, 34, 55, ...
In a remarkable intersection between mathematics and biology, Fibonacci numbers also appear in the arrangement of leaves on the stems of many plants, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's scales.
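For readers who want to see the sequence and its link to spiral patterns concretely, here is a small generic Python sketch: it generates Fibonacci numbers and prints the "golden angle" used in classic textbook models of phyllotaxis. It illustrates the arithmetic only and is not the growth-based patterning mechanism reported in the study.

```python
# Generic illustration of Fibonacci numbers and the golden angle that appear in
# classic phyllotaxis models; not the growth-based mechanism described in the study.
import math

def fibonacci(count):
    """Return the first `count` terms of the sequence 1, 2, 3, 5, 8, ..."""
    terms = [1, 2]
    while len(terms) < count:
        terms.append(terms[-1] + terms[-2])  # each term is the sum of the two before it
    return terms[:count]

print(fibonacci(9))  # [1, 2, 3, 5, 8, 13, 21, 34, 55]

# The golden angle (about 137.5 degrees) is the divergence angle classically
# associated with Fibonacci numbers of left- and right-winding spirals.
golden_angle = math.degrees(math.pi * (3 - math.sqrt(5)))
print(round(golden_angle, 1))  # 137.5
```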
Phyllotactic patterns are prevalent only in the plant kingdom, unlike simpler spiral patterns which are also found in some animals. There is no scientific consensus on why.
Prusinkiewicz subscribes to a theory that, from a plant's perspective, it is easy and efficient to grow new flowers in this manner - as the space becomes available.
As a next step, the research team wants to see if the mechanism they discovered is present in a wide range of other plants. They also plan to use computational techniques to "look" inside the flower heads, to connect the phyllotactic patterns visible from the outside with patterns of the plant's inner vascular system. | Computer scientists at Canada's University of Calgary (UCalgary), along with researchers at Finland's University of Helsinki, have solved the riddle of the formation of patterns in flower heads using genetic, microscopy, and computational modeling methods. The authors examined the emergence of phyllotactic patterns in flower heads of | [] | [] | [] | scitechnews | None | None | None | None | Computer scientists at Canada's University of Calgary (UCalgary), along with researchers at Finland's University of Helsinki, have solved the riddle of the formation of patterns in flower heads using genetic, microscopy, and computational modeling methods. The authors examined the emergence of phyllotactic patterns in flower heads of
If you're walking in a field of flowers this summer, look closely at the beautiful patterns in the flower heads. Figuring out how these distinctive and ubiquitous patterns form has puzzled scientists for centuries.
Now, an international team including University of Calgary researchers has solved the problem that stumped so many, including famed British mathematician, computer scientist and theoretical biologist Alan Turing.
The team's five-year study focused on "phyllotaxis," the distribution of organs such as leaves and flowers on their supporting structure, which is a key attribute of plant architecture.
The formation of spiral phyllotactic patterns has been an open fundamental problem in developmental plant biology for centuries, due to the patterns' role in defining plant form, says study co-author Dr. Przemyslaw Prusinkiewicz, PhD, professor in the Department of Computer Science in the Faculty of Science.
"We have cracked this problem," he says.
The collaboration involved Prusinkiewicz's UCalgary research group and, from the University of Helsinki in Finland, a group led by Prof. Paula Elomaa.
The team combined tools unavailable to Turing - diverse genetic, microscopy and computational modelling techniques - to explain how phyllotactic patterns of flowers emerge in the flower heads of Gerbera hybrida , a member of the daisy family which also includes sunflower.
They found that phyllotactic patterns in gerbera, whose heads have large numbers (in the order of 1,000) of individual flowers, develop in a different way than patterns in plants with small numbers of organs. Scientists had previously explained those patterns in experimental model plants, such as Arabidopsis and tomato.
But the team's discovery of a novel developmental mechanism brings several new elements to the "traditional" theory of phyllotaxis.
"The patterning is not occurring in a static, pre-formed head structure. It occurs concurrently with the growth of the structure - when the flower head develops - and this plays a major role," Prusinkiewicz says.
The team's study, "Phyllotactic Patterning of Gerbera Flower Heads," is published as an open access paper in the Proceedings of the National Academy of Sciences of the USA.
Prusinkiewicz and his research group, which included study co-authors Dr. Mikolaj Cieslak, PhD, a senior research associate, and PhD student Andrew Owens, developed mathematical models based on experimental data obtained by the group at the University of Helsinki.
"This was exemplary interdisciplinary research and international collaboration. Neither group could have obtained these results working alone," Prusinkiewicz notes.
The team found that phyllotactic patterns in gerbera are initiated by molecular processes, controlled by a plant hormone called auxin, taking place at the rim of the flower head. These processes occur long before any morphological (structural) changes can be seen.
As the flower head grows, new flowers are added between those previously formed as space becomes available, but they are shifted asymmetrically toward their older neighbouring flowers - producing a zigzag template for new flowers.
The team is the first to observe and report this asymmetry, and to show that it is key to the emergence of the phyllotactic patterns' mathematical properties. For instance, the flowers are arranged in left- and right-winding spirals whose counts are typically "Fibonacci numbers," a sequence in which each number is the sum of the two preceding ones: 1, 2, 3, 5, 8, 13, 21, 34, 55, ...
In a remarkable intersection between mathematics and biology, Fibonacci numbers also appear in the arrangement of leaves on the stems of many plants, the fruit sprouts of a pineapple, the flowering of an artichoke, and the arrangement of a pine cone's scales.
Phyllotactic patterns are prevalent only in the plant kingdom, unlike simpler spiral patterns which are also found in some animals. There is no scientific consensus on why.
Prusinkiewicz subscribes to a theory that, from a plant's perspective, it is easy and efficient to grow new flowers in this manner - as the space becomes available.
As a next step, the research team wants to see if the mechanism they discovered is present in a wide range of other plants. They also plan to use computational techniques to "look" inside the flower heads, to connect the phyllotactic patterns visible from the outside with patterns of the plant's inner vascular system. |
|||
572 | 3D-Printed Material to Replace Ivory | For centuries, ivory was often used to make art objects. But to protect elephant populations, the ivory trade was banned internationally in 1989. To restore ivory parts of old art objects, one must therefore resort to substitute materials - such as bones, shells or plastic. However, there has not been a really satisfactory solution so far.
TU Wien (Vienna) and the 3D printing company Cubicure GmbH, created as a spin-off of TU Wien, have now developed a high-tech substitute in cooperation with the Archdiocese of Vienna's Department for the Care of Art and Monuments and Addison Restoration: the novel material "Digory" consists of synthetic resin and calcium phosphate particles. It is processed in a hot, liquid state and hardened in the 3D printer with UV rays, exactly in the desired shape. It can then be polished and colour-matched to create a deceptively authentic-looking ivory substitute.
Beautiful and Mechanically Stable
"The research project began with a valuable 17th-century state casket in the parish church of Mauerbach," says Prof. Jürgen Stampfl from the Institute of Materials Science and Technology at TU Wien. "It is decorated with small ivory ornaments, some of which have been lost over time. The question was whether they could be replaced with 3D printing technology."
The team already had experience with similar materials: the research group also works with ceramic materials for dental technology, for example. Nevertheless, it was a challenging task to develop a suitable substitute for ivory: "We had to fulfil a whole range of requirements at the same time," says Thaddäa Rath, who worked on the project as part of her dissertation. "The material should not only look like ivory, the strength and stiffness must also be right, and the material should be machinable."
Stereolithography in the 3D printer
Through numerous experiments, Thaddäa Rath and other members of the team from TU Wien and Cubicure succeeded in finding the right mixture: Tiny calcium phosphate particles with an average diameter of about 7 μm were embedded in a special resin, together with extremely fine silicon oxide powder. The mixture is then processed at high heat in Cubicure's 3D printers using the hot lithography process: Layer by layer, the material is cured with a UV laser until the complete object is finished.
"You also have to bear in mind that ivory is translucent," explains Thaddäa Rath. "Only if you use the right amount of calcium phosphate will the material have the same translucent properties as ivory." Afterwards, the colour of the object can be touched up - the team achieved good results with black tea. The characteristic dark lines that normally run through ivory can also be applied afterwards with high precision.
No more tusks!
In the field of restoration, this is a big step forward: With the new material "Digory," not only is a better, more beautiful and easier to work with substitute for ivory available than before, the 3D technology also makes it possible to reproduce the finest details automatically. Instead of painstakingly carving them out of ivory substitute material, objects can now be printed in a matter of hours.
"With our specially developed 3D printing systems, we process different material formulations for completely different areas of application, but this project was also something new for us," says Konstanze Seidler from Cubicure. "In any case, it is further proof of how diverse the possible applications of stereolithography are."
The team hopes that the new material "Digory" will become generally accepted in the future - as an aesthetically and mechanically high-quality ivory substitute, for which no elephant has to lose a tusk.
T. Rath et al., Developing an ivory-like material for stereolithography-based additive manufacturing, Applied Materials Today, 23, 101016 (2021).
Prof. Jürgen Stampfl
Institut für Werkstoffwissenschaft und Werkstofftechnologie
TU Wien
+43 1 58801 30862, juergen.stampfl@tuwien.ac.at
Dipl.-Ing. Thaddäa Rath
Institut für Werkstoffwissenschaft und Werkstofftechnologie
TU Wien
+43 1 58801 30857 thaddaea.rath @ tuwien.ac.at | A substitute for ivory has been engineered by researchers at Austria's Technical University of Wien (TU Wien) and three-dimensional (3D) printing spinoff Cubicure, in cooperation with the Archdiocese of Vienna's Department for the Care of Art and Monuments and Addison Restoration. "Digory" combines synthetic resin and calcium phosphate particles, processed in a hot, liquid state and cured layer by layer in a 3D printer with ultraviolet light. After printing, the object can be polished and color-matched to give the material an authentic ivory appearance. Cubicure's Konstanze Seidler said, "It is further proof of how diverse the possible applications of stereolithography are." | [] | [] | [] | scitechnews | None | None | None | None | A substitute for ivory has been engineered by researchers at Austria's Technical University of Wien (TU Wien) and three-dimensional (3D) printing spinoff Cubicure, in cooperation with the Archdiocese of Vienna's Department for the Care of Art and Monuments and Addison Restoration. "Digory" combines synthetic resin and calcium phosphate particles, processed in a hot, liquid state and cured layer by layer in a 3D printer with ultraviolet light. After printing, the object can be polished and color-matched to give the material an authentic ivory appearance. Cubicure's Konstanze Seidler said, "It is further proof of how diverse the possible applications of stereolithography are."
573 | Pandemic is Pushing Robots into Retail at Unprecedented Pace | A survey by retail news and analysis firm RetailWire and commercial robotics company Brain Corp. indicates the Covid-19 pandemic has ramped up development and adoption of automation. The poll estimated that 64% of retailers consider it important to have a clear, executable, and budgeted robotics automation strategy in place this year; almost half plan to participate in an in-store robotics project in the next 18 months. Brain Corp.'s Josh Baylin said, "The global pandemic brought the value of robotic automation sharply into focus for many retailers, and we now see them accelerating their deployment timelines to reap the advantages now and into the future." Heightened focus on cleanliness is one of the key drivers of adoption.
575 | Vibrations From a Smartphone Can Help Spot Unsafe Drinking Water | By Matthew Sparkes
A smartphone's motion sensor could help identify contaminated water (image: dcphoto / Alamy)
The vibrations from an iPhone's ringtone can be used to measure the viscosity of a liquid, which could allow it to detect whether water is polluted or to test for kidney conditions and pregnancy by measuring the levels of protein or hormones in urine.
Yandao Huang at Shenzhen University in China and his colleagues built a 3D-printed cup with a mount on the outside designed to securely hold an iPhone 7. They then used the phone's vibrating motor to agitate dozens ...
Vibrations from a smartphone's ringtone can measure a liquid's viscosity, according to researchers at China's Shenzhen University. Shenzhen's Yandao Huang and colleagues designed a three-dimensionally-printed drinking cup with an external mount for an iPhone 7, and used the handset's vibrating motor to agitate liquids within the cup; the handset's built-in motion sensor quantified the friction between the liquid molecules by detecting reflected motion waves. The team could differentiate between 30 types of liquid with more than 95% average accuracy. The phone could distinguish between liquids containing bacteria, dirt, or minerals through changes to viscosity, and differentiated between tap water, rain water, puddle water, and water with prolonged exposure to air, with an error rate of just 2.5%. Huang said the study's results could lead to a simple test for measuring the safety of drinking water.
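The signal-processing idea behind the cup can be sketched in a few lines. The snippet below is not the researchers' pipeline - their system trains a classifier over many vibration features captured by the phone - but it illustrates the core assumption under simple conditions: a more viscous liquid damps the cup's vibration faster, so a single damping constant fitted to the accelerometer trace can separate calibrated liquids. The function names and reference values are illustrative only.
```python
import numpy as np

def damping_constant(samples, rate_hz):
    """Estimate how fast a vibration burst decays in an accelerometer trace.

    samples: 1-D array of acceleration magnitudes captured just after the
    phone's vibration motor stops; rate_hz: accelerometer sampling rate.
    Returns the exponential decay constant k (1/s) fitted to the envelope.
    """
    env = np.abs(samples - np.mean(samples))       # crude amplitude envelope
    t = np.arange(len(samples)) / rate_hz
    keep = env > 0.05 * env.max()                  # ignore near-zero tail samples
    k, _ = np.polyfit(t[keep], -np.log(env[keep]), 1)
    return k

def identify_liquid(trace, rate_hz, references):
    """Pick the calibrated liquid whose damping constant is closest."""
    k = damping_constant(np.asarray(trace, dtype=float), rate_hz)
    best = min(references, key=lambda name: abs(references[name] - k))
    return best, k

# Reference constants would come from recording known liquids in the same cup;
# these numbers are placeholders, not measured values.
references = {"tap water": 8.0, "salty water": 9.5, "sugar syrup": 15.0}
```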
576 | Is 'Femtech' the Next Big Thing in Healthcare? | This article is part of our new series on the Future of Health Care , which examines changes in the medical field.
Women represent half of the planet's population. Yet tech companies catering to their specific health needs represent a minute share of the global technology market.
In 2019, the "femtech" industry - software and technology companies addressing women's biological needs - generated $820.6 million in global revenue and received $592 million in venture capital investment, according to PitchBook, a financial data and research company. That same year, the ride-sharing app Uber alone raised $8.1 billion in an initial public offering . The difference in scale is staggering, especially when women spend an estimated $500 billion a year on medical expenses, according to PitchBook.
Tapping into that spending power, a multitude of apps and tech companies have sprung up in the last decade to address women's needs, including tracking menstruation and fertility, and offering solutions for pregnancy, breastfeeding and menopause. Medical start-ups also have stepped in to prevent or manage serious conditions such as cancer.
Numerous apps and technology companies are emerging in the "femtech" industry to address women's biological needs. In addition to period- and fertility-tracking apps, some tech companies have rolled out wearable breast pumps and pelvic exercise apps. Others focus on "menotech," or technologies that help women going through menopause, or on cervical, breast, and other cancers that affect women. Israeli startup MobileODT has developed a smart imaging device that leverages smartphones and artificial intelligence to screen for cervical cancer, delivering diagnoses in about a minute. French startup Lattice Medical has created a three-dimensionally (3D) printed hollow breast implant, which permits the regeneration of tissue and eventually is absorbed by the body. With women spending about $500 billion annually on medical expenses according to PitchBook, there is significant potential in this market.
578 | Some FDA-Approved AI Medical Devices Are Not 'Adequately' Evaluated, Stanford Study Says | Some AI-powered medical devices approved by the U.S. Food and Drug Administration (FDA) are vulnerable to data shifts and bias against underrepresented patients. That's according to a Stanford study published in Nature Medicine last week, which found that even as AI becomes embedded in more medical devices - the FDA approved over 65 AI devices last year - the accuracy of these algorithms isn't necessarily being rigorously studied.
Although the academic community has begun developing guidelines for AI clinical trials, there aren't established practices for evaluating commercial algorithms. In the U.S., the FDA is responsible for approving AI-powered medical devices, and the agency regularly releases information on these devices including performance data.
The coauthors of the Stanford research created a database of FDA-approved medical AI devices and analyzed how each was tested before it gained approval. Almost all of the AI-powered devices - 126 out of 130 - approved by the FDA between January 2015 and December 2020 underwent only retrospective studies at their submission, according to the researchers. And none of the 54 approved high-risk devices were evaluated by prospective studies, meaning test data was collected before the devices were approved rather than concurrent with their deployment.
The coauthors argue that prospective studies are necessary, particularly for AI medical devices, because in-the-field usage can deviate from the intended use. For example, most computer-aided diagnostic devices are designed to be decision-support tools rather than primary diagnostic tools. A prospective study might reveal that clinicians are misusing a device for diagnosis, leading to outcomes that differ from what would be expected.
There's evidence to suggest that these deviations can lead to errors. Tracking by the Pennsylvania Patient Safety Authority in Harrisburg found that from January 2016 to December 2017, EHR systems were responsible for 775 problems during laboratory testing in the state, with human-computer interactions responsible for 54.7% of events and the remaining 45.3% caused by a computer. Furthermore, a draft U.S. government report issued in 2018 found that clinicians not uncommonly miss alerts - some AI-informed - ranging from minor issues about drug interactions to those that pose considerable risks.
The Stanford researchers also found a lack of patient diversity in the tests conducted on FDA-approved devices. Among the 130 devices, 93 didn't undergo a multisite assessment, while 4 were tested at only one site and 8 devices in only two sites. And the reports for 59 devices didn't mention the sample size of the studies. Of the 71 device studies that had this information, the median size was 300, and just 17 device studies considered how the algorithm might perform on different patient groups.
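The kind of tallying described above is straightforward to reproduce on any inventory of approvals. The sketch below assumes a hypothetical CSV with columns study_type ('retrospective' or 'prospective'), num_sites and sample_size (left empty when unreported); the file and column names are invented for the example and are not the Stanford team's actual dataset.
```python
import csv
from statistics import median

def summarize_evaluations(path):
    """Tally how rigorously devices in a (hypothetical) approval list were tested."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    retro_only = sum(r["study_type"] == "retrospective" for r in rows)
    multisite = sum(r["num_sites"].isdigit() and int(r["num_sites"]) > 1 for r in rows)
    sizes = [int(r["sample_size"]) for r in rows if r["sample_size"].isdigit()]
    return {
        "devices": len(rows),
        "retrospective_only": retro_only,
        "with_multisite_assessment": multisite,
        "median_reported_sample_size": median(sizes) if sizes else None,
    }

# print(summarize_evaluations("fda_ai_devices.csv"))  # file name is hypothetical
```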
Partly due to a reticence to release code, datasets, and techniques, much of the data used today to train AI algorithms for diagnosing diseases might perpetuate inequalities, previous studies have shown. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study , researchers from the University of Toronto, the Vector Institute, and MIT showed that widely used chest X-ray datasets encode racial, gender, and socioeconomic bias.
Beyond basic dataset challenges, models lacking sufficient peer review can encounter unforeseen roadblocks when deployed in the real world. Scientists at Harvard found that algorithms trained to recognize and classify CT scans could become biased toward scan formats from certain CT machine manufacturers. Meanwhile, a Google-published whitepaper revealed challenges in implementing an eye disease-predicting system in Thailand hospitals, including issues with scan accuracy. And studies conducted by companies like Babylon Health , a well-funded telemedicine startup that claims to be able to triage a range of diseases from text messages, have been repeatedly called into question.
The coauthors of the Stanford study argue that information about the number of sites in an evaluation must be "consistently reported" in order for clinicians, researchers, and patients to make informed judgments about the reliability of a given AI medical device. Multisite evaluations are important for understanding algorithmic bias and reliability, they say, and can help in accounting for variations in equipment, technician standards, image storage formats, demographic makeup, and disease prevalence.
"Evaluating the performance of AI devices in multiple clinical sites is important for ensuring that the algorithms perform well across representative populations," the coauthors wrote. "Encouraging prospective studies with comparison to standard of care reduces the risk of harmful overfitting and more accurately captures true clinical outcomes. Postmarket surveillance of AI devices is also needed for understanding and measurement of unintended outcomes and biases that are not detected in prospective, multicenter trial." | Certain artificial intelligence (AI) -powered medical devices approved by the U.S. Food and Drug Administration (FDA) are susceptible to data shifts and bias against underrepresented patients, according to a study by Stanford University researchers. The researchers compiled a database of FDA-approved medical AI devices, and analyzed how each was evaluated before approval. They found 126 of 130 devices approved between January 2015 and December 2020 underwent only retrospective studies at submission, and none of the 54 approved high-risk devices were assessed via prospective review. The researchers contend prospective studies are needed especially for AI medical devices, given that field applications of the devices can deviate from their intended uses. They also said data about the number of sites used in an evaluation must be "consistently reported," in order for doctors, researchers, and patients to make informed decisions about the reliability of a AI-powered medical device. | [] | [] | [] | scitechnews | None | None | None | None | Certain artificial intelligence (AI) -powered medical devices approved by the U.S. Food and Drug Administration (FDA) are susceptible to data shifts and bias against underrepresented patients, according to a study by Stanford University researchers. The researchers compiled a database of FDA-approved medical AI devices, and analyzed how each was evaluated before approval. They found 126 of 130 devices approved between January 2015 and December 2020 underwent only retrospective studies at submission, and none of the 54 approved high-risk devices were assessed via prospective review. The researchers contend prospective studies are needed especially for AI medical devices, given that field applications of the devices can deviate from their intended uses. They also said data about the number of sites used in an evaluation must be "consistently reported," in order for doctors, researchers, and patients to make informed decisions about the reliability of a AI-powered medical device.
579 | Low-Cost NIST Demo Links Public Safety Radios to Broadband Wireless Network | Engineers at the National Institute of Standards and Technology (NIST) have built a low-cost computer system that connects older public safety radios with the latest wireless communications networks, showing how first responders might easily take advantage of broadband technology offering voice, text, instant messages, video and data capabilities.
NIST's prototype system could help overcome a major barrier to upgrading public safety communications. Many of the 4.6 million U.S. public safety personnel still use traditional analog radios, due to the high cost of switching to digital cellphones and these systems' slow incorporation of older "push to talk" features that are both familiar and critical to first responders.
"This NIST project aims to develop a prototype infrastructure that could be used by commercial entities to create a low-cost solution for public safety users, allowing them to interconnect their radio systems to broadband networks," NIST engineer Jordan O'Dell said.
"There isn't a commercial option that compares to what we are developing. The goal here is to create a prototype and accelerate technology development in industry that will fill a significant gap."
The NIST prototype connects analog Land Mobile Radio (LMR) handsets and towers with a Long-Term Evolution (LTE) - the most widespread wireless standard - server that handles operations inside a broadband network. The LTE system is known as Mission Critical Push-to-Talk, which refers to essential aspects of public safety radios such as high availability and reliability, speaker identification, emergency calling and clear audio quality.
As described in a recent report, the NIST system has three main parts: a software-defined radio, an open-source software environment for managing the software radio, and a user interface for LTE Mission Critical Push-to-Talk handsets.
NIST's design goals included robustness, low cost and close conformance to existing and future standards. The physical equipment includes computer hardware that runs all three components, suitable software and an antenna. The computer must have an internet connection to the LTE system. The entire setup is about the size of a video game console plus a laptop or desktop computer.
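As a rough illustration of how those parts fit together, the sketch below models the bridge's control flow in plain Python. It is purely schematic: the sdr and mcptt objects stand in for the software-defined radio and the Mission Critical Push-to-Talk client, and none of the names correspond to a real API used in the NIST prototype.
```python
from dataclasses import dataclass

@dataclass
class BridgeConfig:
    lmr_frequency_hz: float   # analog LMR channel the SDR listens to
    talkgroup: str            # MCPTT talkgroup the audio is patched into
    sample_rate_hz: int = 8000

class LmrToLteBridge:
    """Relay audio heard on an analog LMR channel into an LTE push-to-talk group."""

    def __init__(self, sdr, mcptt, config: BridgeConfig):
        self.sdr = sdr          # placeholder: yields demodulated audio frames
        self.mcptt = mcptt      # placeholder: sends audio into the broadband network
        self.config = config

    def run(self):
        for audio_frame in self.sdr.receive(self.config.lmr_frequency_hz,
                                            self.config.sample_rate_hz):
            if audio_frame is not None:                  # carrier detected on the channel
                self.mcptt.push_to_talk(self.config.talkgroup, audio_frame)
```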
The NIST system costs less than existing industry and government efforts to bridge radio and cellphone networks. One such activity requires a radio system that supports the Project 25 Inter-Radio Frequency Subsystem Interface, which few public safety agencies have or can afford to buy or retrofit. Another effort to connect existing radio handsets to a "box" that bridges into the broadband network requires dedicated "donor" radios and interfaces, also expensive.
"We want public safety agencies to have a very inexpensive option that can interface with old technology when the other options are out of reach," O'Dell said.
NIST researchers are continuing to work on the prototype, with plans to improve the interface to the broadband network and link to additional types of radios. To promote technology transfer, they intend to publicly release all capabilities on an open source basis for use by anyone.
This work was made possible by the Public Safety Trust Fund , which provides funding to organizations across NIST leveraging NIST expertise in communications, cybersecurity, manufacturing and sensors for research on critical, lifesaving technologies for first responders.
Report: Christopher Walton and Chic O'Dell. Bridging Analog Land Mobile Radio to LTE Mission Critical Push-to-Talk Communications. NISTIR 8338. December 2020.
A low-cost prototype computer system developed by U.S. National Institute of Standards and Technology (NIST) engineers can connect older analog public safety radios to modern broadband networks. The prototype links analog Land Mobile Radio (LMR) handsets and towers to a Long-Term Evolution (LTE) server, allowing LMR radio users and LTE network users to communicate as though they both are on the same push-to-talk network. The system integrates software-defined radio, an open-source software environment for managing software radio, and a user interface for LTE handsets. NIST's Jordan O'Dell said, "There isn't a commercial option that compares to what we are developing. The goal here is to create a prototype and accelerate technology development in industry that will fill a significant gap."
580 | KAUST Collaboration With Intel, Microsoft, University of Washington Accelerates Training in ML Models | April 12, 2021 - Inserting lightweight optimization code in high-speed network devices has enabled a KAUST-led collaboration to increase the speed of machine learning on parallelized computing systems five-fold.
This "in-network aggregation" technology, developed with researchers and systems architects at Intel, Microsoft and the University of Washington, can provide dramatic speed improvements using readily available programmable network hardware.
The fundamental benefit of artificial intelligence (AI) that gives it so much power to "understand" and interact with the world is the machine-learning step, in which the model is trained using large sets of labeled training data. The more data the AI is trained on, the better the model is likely to perform when exposed to new inputs.
The recent burst of AI applications is largely due to better machine learning and the use of larger models and more diverse datasets. Performing the machine-learning computations, however, is an enormously taxing task that increasingly relies on large arrays of computers running the learning algorithm in parallel.
"How to train deep-learning models at a large scale is a very challenging problem," says Marco Canini from the KAUST research team. "The AI models can consist of billions of parameters, and we can use hundreds of processors that need to work efficiently in parallel. In such systems, communication among processors during incremental model updates easily becomes a major performance bottleneck."
The team found a potential solution in new network technology developed by Barefoot Networks, a division of Intel.
"We use Barefoot Networks' new programmable dataplane networking hardware to offload part of the work performed during distributed machine-learning training," explains Amedeo Sapio, a KAUST alumnus who has since joined the Barefoot Networks team at Intel. "Using this new programmable networking hardware, rather than just the network, to move data means that we can perform computations along the network paths."
The key innovation of the team's SwitchML platform is to allow the network hardware to perform the data aggregation task at each synchronization step during the model update phase of the machine-learning process. Not only does this offload part of the computational load, it also significantly reduces the amount of data transmission.
"Although the programmable switch dataplane can do operations very quickly, the operations it can do are limited," says Canini. "So our solution had to be simple enough for the hardware and yet flexible enough to solve challenges such as limited onboard memory capacity. SwitchML addresses this challenge by co-designing the communication network and the distributed training algorithm, achieving an acceleration of up to 5.5 times compared to the state-of-the-art approach."
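A host-side simulation makes the idea concrete. The sketch below only imitates in-network aggregation in NumPy - SwitchML itself runs the summation on programmable switch hardware, working on gradients scaled to integers - but it shows why a switch with very little memory can aggregate arbitrarily large models: it only ever holds one chunk-sized accumulator at a time.
```python
import numpy as np

def in_network_aggregate(worker_gradients, chunk_size=256):
    """Simulate a switch summing gradient chunks streamed by the workers.

    worker_gradients: list of equally sized 1-D integer arrays (gradients are
    assumed to be pre-scaled to integers, since switch dataplanes lack floats).
    Returns the element-wise sum, i.e. what each worker would receive back.
    """
    n = worker_gradients[0].size
    result = np.empty(n, dtype=np.int64)
    accumulator = np.zeros(chunk_size, dtype=np.int64)   # the only "switch memory"
    for start in range(0, n, chunk_size):
        end = min(start + chunk_size, n)
        accumulator[: end - start] = 0
        for g in worker_gradients:                       # one packet per worker
            accumulator[: end - start] += g[start:end]
        result[start:end] = accumulator[: end - start]   # multicast back to workers
    return result

# toy run: 4 workers, 10,000-parameter "model"
workers = [np.random.randint(-1000, 1000, size=10_000) for _ in range(4)]
summed = in_network_aggregate(workers)
assert np.array_equal(summed, np.sum(workers, axis=0))
```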
Sapio, A., Canini, M., Ho, C.-Y., Nelson, J., Kalnis, P., Kim, C., Krishnamurthy, A., Moshref, M., Ports, D.R.K., Richtarik, P. Scaling distributed machine learning with in-network aggregation . The 18th USENIX Symposium on Networked Systems Design and Implementation (NSDI '21), Apr 2021.
Source: Marco Canini, KAUST Discovery
Researchers at Saudi Arabia's King Abdullah University of Science and Technology (KAUST), Intel, Microsoft, and University of Washington have achieved a more than five-fold increase in the speed of machine learning on parallelized computing systems. Their "in-network aggregation" technology involved inserting lightweight optimization code in high-speed network devices. The researchers used new programmable dataplane networking hardware developed by Intel's Barefoot Networks to offload part of the computational load during distributed machine learning training. The new SwitchML platform enables the network hardware to perform data aggregation at each synchronization step during the model update phase. KAUST's Marco Canini said, "Our solution had to be simple enough for the hardware and yet flexible enough to solve challenges such as limited onboard memory capacity."
581 | Researchers Identify Indicators for Audience Measurement Model for Streaming Platforms | In recent years the boom in streaming platforms and video on demand services has led to disruption in audiences , representing a difficulty when measuring the number of viewers of the content distributed by these platforms.
This new situation has not only altered the traditional television and film viewing model , but also has impacted the advertising market , which is a fundamental factor in funding and the business of audiovisual entertainment.
In this context, real and objective audience measurement (which is not influenced by the interests of the platforms) has become a key objective; it is fundamental to obtain real-time data on the reach of each production released so as to analyse its performance, know its market position, meet user demands and develop profitable services.
A recent study performed by researchers from the Universitat Oberta de Catalunya (UOC) analysed audience behaviour and measurement systems on the Netflix streaming platform and video on demand service. Their aim was to establish a more reliable audience measurement model.
"The audience has been the main financial driving force of television while advertising has been its main source of income, and therefore, for an evolving audiovisual sector, it is crucial to have accurate viewer and user numbers," explained Elena Neira, a researcher from the GAME group of the UOC Faculty of Information and Communication Sciences and the main author of the study.
New consumer habits influence audience measurement
The proliferation of streaming platforms and video on demand services has exponentially increased the quantity of content offered to users, leading them to change the way in which they watch series and films.
These new consumer habits have generated a new TV and video ecosystem which, among other factors, stems from a wider variety of devices on which people can view content, such as Smart TVs, smartphones, computers or tablets .
"At present, viewers can decide how, where and when to watch a series or a film, and therefore the traditional audience measurement models are not capable of covering the new consumer reality fully . Indeed, the idea of an audience in the sphere of streaming goes far beyond and is much more complex than the simple accumulation of viewings ," said Neira, who also stressed that, at present, nobody knows the market share or average use of the platforms, or how many people have abandoned traditional television because they can watch content online.
"Our objective is to offer a starting point and to study in depth the real market share of streaming in the framework of the system's structure. We also want to offer some certainty and information that is of value to everyone but in particular for the television companies and the creators," the UOC researcher underlined.
To analyse this new TV and video ecosystem, the experts chose to assess the production Money Heist , since it allowed them to measure the success and popularity of the series through different channels such as traditional television and a streaming and video on demand platform, Netflix.
Since being included among the content offered by Netflix , this Spanish-made production has become a worldwide phenomenon , for which there are no specific audience data.
New parameters for reliable audience measurement
The researchers from the UOC indicate that the concept of audience has been altered in this new TV and video ecosystem . This is due to the evolution of its parameters , which now include new metrics such as audience retention or the popularity of the content , which are difficult to standardize for measurement.
The researchers argue that, in order to carry out better audience measurements, factors such as audience fragmentation should be taken into account, along with the need to weight the data collected, giving importance to variables such as viewing intensity - the famous binge watching - or the volatility of the streaming platform's users. In this respect, the researcher Elena Neira stressed that "we must include new dimensions, since the new concept of audience includes aspects that are especially relevant such as the users' commitment to or involvement with the content and the depth of attention of each viewer."
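As a purely hypothetical illustration of how such dimensions could be combined, the sketch below blends retention, viewing intensity and external buzz into a single score; the field names, caps and weights are invented for the example and do not correspond to the study's methodology or to any platform's actual metrics.
```python
def engagement_score(title_stats, weights=None):
    """Blend retention, binge intensity and external buzz into one 0-1 score."""
    w = weights or {"retention": 0.4, "intensity": 0.3, "buzz": 0.3}
    retention = title_stats["completions"] / max(title_stats["starts"], 1)
    intensity = min(title_stats["avg_minutes_per_viewer_day"] / 120.0, 1.0)  # cap binge signal
    buzz = min((title_stats["social_mentions"] + title_stats["search_queries"]) / 1e6, 1.0)
    return w["retention"] * retention + w["intensity"] * intensity + w["buzz"] * buzz

example = {"starts": 1_000_000, "completions": 640_000,
           "avg_minutes_per_viewer_day": 95,
           "social_mentions": 420_000, "search_queries": 300_000}
print(round(engagement_score(example), 3))
```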
The heterogeneity intrinsic in the business model of these platforms introduces elements which greatly hinder the construction of a standard and global audience concept. For example, unlike traditional television channels, there is not a level playing field for streaming platforms as regards household consumption and penetration, market share and availability.
All this leads to audience measurement distortions, which can become even more pronounced once the lifecycle of the content on a streaming platform is taken into account, since this is a significant factor determining the number of viewers.
Other external factors which may influence reliable audience measurement are the impact achieved on social media , the number of downloads or the number of searches carried out on search engines like Google .
"The use of streaming platforms is a mainstream activity ; they are gaining more and more hours of the population's entertainment time. This not only affects the sector, but also has legislative implications, since these business models do not have the same regulations as traditional television companies and should have certain obligations to be able to determine the size of their contribution to the state coffers," said Neira, who warned about how the platforms only tend to provide overall audience data, without figures that are specific to each territory.
This UOC research supports sustainable development goals (SDGs) 8, decent work and economic growth, and 9, industry, innovation and infrastructure.
Reference article
Neira, Elena; Clares-Gavilán, Judith; Sánchez-Navarro, Jordi (2021). "New audience dimensions in streaming platforms: the second life of Money heist on Netflix as a case study." Profesional de la información, v. 30, n. 1, e300113. https://doi.org/10.3145/epi.2021.ene.13
UOC R&I
The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century, by studying interactions between technology and human & social sciences with a specific focus on the network society , e-learning and e-health . Over 500 researchers and 51 research groups work among the University's seven faculties and two research centres: the Internet Interdisciplinary Institute ( IN3 ) and the eHealth Center ( eHC ).
The United Nations' 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu. #UOC25years
A study by researchers at Spain's Universitat Oberta de Catalunya (UOC) analyzed Netflix's audience behavior and measurement systems as the potential basis for a more-reliable audience measurement model. UOC's Elena Neira said, "At present, viewers can decide how, where and when to watch a series or a film, and therefore the traditional audience measurement models are not capable of covering the new consumer reality fully." Neira said the number of people who have shifted from watching traditional television to online content is unknown. The researchers said an improved audience measurement model should take into consideration such things as audience fragmentation; viewing intensity, or binge watching; and external factors like social media usage, numbers of downloads, and numbers of Google searches.
In recent years the boom in streaming platforms and video on demand services has led to disruption in audiences , representing a difficulty when measuring the number of viewers of the content distributed by these platforms.
This new situation has not only altered the traditional television and film viewing model , but also has impacted the advertising market , which is a fundamental factor in funding and the business of audiovisual entertainment.
In this context, real and objective audience measurement (which is not influenced by the interests of the platforms) has become a key objective; it is fundamental to obtain real-time data on the reach of each production released so as to analyse its performance, know its market position, meet user demands and develop profitable services.
A recent study performed by researchers from the Universitat Oberta de Catalunya (UOC) analysed audience behaviour and measurement systems on the Netflix streaming platform and video on demand service. Their aim was to establish a more reliable audience measurement model.
"The audience has been the main financial driving force of television while advertising has been its main source of income, and therefore, for an evolving audiovisual sector, it is crucial to have accurate viewer and user numbers," explained Elena Neira, a researcher from the GAME group of the UOC Faculty of Information and Communication Sciences and the main author of the study.
New consumer habits influence audience measurement
The proliferation of streaming platforms and video on demand services has exponentially increased the quantity of content offered to users, leading them to change the way in which they watch series and films.
These new consumer habits have generated a new TV and video ecosystem which, among other factors, stems from a wider variety of devices on which people can view content, such as Smart TVs, smartphones, computers or tablets .
"At present, viewers can decide how, where and when to watch a series or a film, and therefore the traditional audience measurement models are not capable of covering the new consumer reality fully . Indeed, the idea of an audience in the sphere of streaming goes far beyond and is much more complex than the simple accumulation of viewings ," said Neira, who also stressed that, at present, nobody knows the market share or average use of the platforms, or how many people have abandoned traditional television because they can watch content online.
"Our objective is to offer a starting point and to study in depth the real market share of streaming in the framework of the system's structure. We also want to offer some certainty and information that is of value to everyone but in particular for the television companies and the creators," the UOC researcher underlined.
To analyse this new TV and video ecosystem, the experts chose to assess the production Money Heist , since it allowed them to measure the success and popularity of the series through different channels such as traditional television and a streaming and video on demand platform, Netflix.
Since being included among the content offered by Netflix , this Spanish-made production has become a worldwide phenomenon , for which there are no specific audience data.
New parameters for reliable audience measurement
The researchers from the UOC indicate that the concept of audience has been altered in this new TV and video ecosystem . This is due to the evolution of its parameters , which now include new metrics such as audience retention or the popularity of the content , which are difficult to standardize for measurement.
It is thought that, in order to be able to carry out better audience measurements, factors should be taken into account such as audience fragmentation , the need to weight the data collected, giving importance to variables such as viewing intensity - the famous binge watching - or the volatility of the streaming platform's users . In this respect, the researcher Elena Neira stressed that "we must include new dimensions , since the new concept of audience includes aspects that are especially relevant such as the users' commitment to or involvement with the content and the depth of attention of each viewer."
The heterogeneity intrinsic in the business model of these platforms introduces elements which greatly hinder the construction of a standard and global audience concept. For example, unlike traditional television channels, there is not a level playing field for streaming platforms as regards household consumption and penetration, market share and availability.
All this leads to audience measurement distortions , which may be more biased on taking into account the lifecycle of the content on a streaming platform, since this will be a significant factor determining the number of viewers.
Other external factors which may influence reliable audience measurement are the impact achieved on social media , the number of downloads or the number of searches carried out on search engines like Google .
"The use of streaming platforms is a mainstream activity ; they are gaining more and more hours of the population's entertainment time. This not only affects the sector, but also has legislative implications, since these business models do not have the same regulations as traditional television companies and should have certain obligations to be able to determine the size of their contribution to the state coffers," said Neira, who warned about how the platforms only tend to provide overall audience data, without figures that are specific to each territory.
This UOC research supports sustainable development goal (SDG) 8, decent work and economic growth, and industry, innovation and infrastructure .
Reference article
Neira, Elena; Clares-Gavilán, Judith; Sánchez-Navarro, Jordi (2021). "New audience dimensions in streaming platforms: the second life of Money heist on Netflix as a case study." Profesional de la información, v. 30, n. 1, e300113. https://doi.org/10.3145/epi.2021.ene.13
UOC R&I
The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century, by studying interactions between technology and the human and social sciences, with a specific focus on the network society, e-learning and e-health. Over 500 researchers and 51 research groups work across the University's seven faculties and two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC).
The United Nations' 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu. #UOC25years
582 | Millions of Devices at Risk From NAME:WRECK DNS Bugs
More than 100 million connected internet of things (IoT) devices, as many as 36,000 of them physically located in the UK, are thought to be at risk from nine newly disclosed DNS vulnerabilities, discovered by Forescout Research Labs and JSOF, and collectively dubbed NAME:WRECK.
The NAME:WRECK bugs affect four widely used TCP/IP stacks - FreeBSD, IPnet, Nucleus NET and NetX - which are present in well-known IT software and IoT/OT firmware.
FreeBSD, for example, runs on high-performance servers on millions of networks and is used on other well-known open source projects such as firewalls and some commercial network appliances. Nucleus NET has over three billion known installations in medical devices, avionics systems and building automation. NetX, meanwhile, runs in medical devices, systems-on-a-chip and several types of printer, as well as energy and power equipment in industrial control systems (ICS).
As a result of this, the vulnerabilities impact organisations in multiple sectors, from government to healthcare, manufacturing and retail, and if successfully exploited by malicious actors in a denial of service (DoS) or remote code execution (RCE) attack, could be used to disrupt or take control of victim networks.
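The flaws broadly concern how these stacks handle DNS data, so the defensive pattern at stake can be illustrated with a short, hypothetical sketch. This is not code from any affected stack; it simply shows the kind of bounds and loop checks whose absence lets a malformed DNS response crash a parser (DoS) or corrupt memory (RCE).

# Hypothetical sketch - not code from FreeBSD, IPnet, Nucleus NET or NetX.
# It decodes a DNS name from a raw message while enforcing bounds and loop
# checks, rejecting malformed input instead of misbehaving on it.

def decode_name(msg: bytes, offset: int, max_jumps: int = 10) -> str:
    labels, jumps = [], 0
    while True:
        if offset >= len(msg):
            raise ValueError("name runs past the end of the message")
        length = msg[offset]
        if length == 0:                          # root label: end of name
            break
        if length & 0xC0 == 0xC0:                # RFC 1035 compression pointer
            if offset + 1 >= len(msg):
                raise ValueError("truncated compression pointer")
            jumps += 1
            if jumps > max_jumps:
                raise ValueError("too many pointer hops (possible loop)")
            offset = ((length & 0x3F) << 8) | msg[offset + 1]
            continue
        if length > 63 or offset + 1 + length > len(msg):
            raise ValueError("label overruns the message")
        labels.append(msg[offset + 1 : offset + 1 + length].decode("ascii", "replace"))
        offset += 1 + length
    return ".".join(labels)

print(decode_name(b"\x07example\x03com\x00", 0))   # example.com

try:
    decode_name(b"\xc0\x00", 0)                    # pointer that points at itself
except ValueError as err:
    print("rejected:", err)

Robust network stacks apply checks along these lines; embedded parsers that skip them are the ones such advisories keep finding.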
"NAME:WRECK is a significant and widespread set of vulnerabilities with the potential for large-scale disruption," said Daniel dos Santos, research manager at Forescout Research Labs. "Complete protection against NAME:WRECK requires patching devices running the vulnerable versions of the IP stacks and so we encourage all organisations to make sure they have the most up-to-date patches for any devices running across these affected IP stacks.
"Unless urgent action is taken to adequately protect networks and the devices connected to them, it could be just a matter of time until these vulnerabilities are exploited, potentially resulting in major government data hacks, manufacturer disruption or hotel guest safety and security."
Although FreeBSD, Nucleus NET and NetX have all been patched recently, NAME:WRECK - like many other vulnerabilities affecting deployed IoT devices - will inevitably be hard to remediate in some instances: IoT technology is often deeply embedded in organisational systems, can be hard to manage, and is sometimes essentially impossible to patch.
In the light of this, Forescout and JSOF are recommending a series of mitigations for affected organisations.
NAME:WRECK is the second major set of TCP/IP vulnerabilities uncovered by Forescout's team in the past year as part of a research programme called Project Memoria.
In December 2020, the firm issued a warning over 33 different flaws, referred to as Amnesia33, affecting devices made by over 150 different tech manufacturers. Such was the scale of the Amnesia33 disclosure that it prompted an emergency alert from the US Cyber Security and Infrastructure Security Agency.
583 | China's Factories Automate as Worker Shortage Looms
China's working-age population has dropped by more than 5 million in the last decade, and factories are responding to the resulting labor shortage with automation. Home appliance giant Midea, for instance, has rolled out a three-year plan to outfit its 34 factories with more technology. Midea's Shirley Zhou said its two factories that have implemented sensors and robots have seen an almost 30% jump in assembly efficiency. Datacenter operator Equinix's Jeremy Deutsch said technology to track and analyze global production is of particular interest, and factory digitalization is fueling demand for datacenters. Said Victor Du at consulting firm Alvarez & Marsal Asia, "As a society, the concern should (be) achieving the same level of manufacturing output, or even higher quality, higher output, with a lower population after 20, 30 years. If you look at this point, digitalization or upgrading of technology will be very necessary."
584 | Speeding Up Sequence Alignment Across the Tree of Life
A sequence search engine for a new era of conservation genomics
A team of researchers from the Max Planck Institutes of Developmental Biology in Tübingen and the Max Planck Computing and Data Facility in Garching has developed new search capabilities that will allow scientists to compare the biochemical makeup of different species from across the tree of life. Its combination of accuracy and speed is hitherto unrivalled.
Humans share many sequences of nucleotides that make up our genes with other species - with pigs in particular, but also with mice and even bananas. Accordingly, some proteins in our bodies - strings of amino acids assembled according to the blueprint of the genes - can also be the same as (or similar to) some proteins in other species. These similarities might sometimes indicate that two species have a common ancestry, or they may simply come about if the evolutionary need for a certain feature or molecular function happens to arise in the two species.
But of course, finding out what you share with a pig or a banana can be a monumental task; the search of a database with all the information about you, the pig, and the banana is computationally quite involved. Researchers are expecting that the genomes of more than 1.5 million eukaryotic species - that includes all animals, plants, and fungi - will be sequenced within the next decade. "Even now, with only hundreds of thousands of genomes available (mostly representing small genomes of bacteria and viruses), we are already looking at databases with up to 370 million sequences. Most current search tools would simply be impracticable and take too long to analyze data of the magnitude that we are expecting in the near future," explains Hajk-Georg Drost, Computational Biology group leader in the Department of Molecular Biology of the Max Planck Institute of Developmental Biology in Tübingen.
"For a long time, the gold standard for this kind of analyses used to be a tool called BLAST," recalls Drost. "If you tried to trace how a protein was maintained by natural selection or how it developed in different phylogenetic lineages, BLAST gave you the best matches at this scale. But it is foreseeable that at some point the databases will grow too large for comprehensive BLAST searches."
At the core of the problem is a tradeoff between speed and sensitivity: just as you will miss some small or well-hidden Easter eggs if you scan a room only briefly, speeding up the search for similar protein sequences in a database typically comes with the downside of missing some of the less obvious matches.
"This is why, some time ago, we started to extend DIAMOND, in the hope that it would allow us to deal with large datasets in a reasonable amount of time and with reasonable sensitivity," remembers Benjamin Buchfink, collaborator and PhD student in Drost's research group since 2019. DIAMOND was initially developed for metagenomics applications in Daniel Huson's research group at the University of Tübingen, with Benjamin already a main contributor to the initial versions. "It did, but it also came with a downside," Buchfink continues: "It couldn't pick up some of the more distant evolutionary relationships." That means that while the original DIAMOND may have been sensitive enough to detect a given human amino acid sequence in a chimpanzee, it may have been blind to the occurrence of a similar sequence in an evolutionarily more remote species.
While the original DIAMOND search algorithm was useful for studying material extracted directly from environmental samples, other research goals require more sensitive tools. The team of researchers from Tübingen and Garching was now able to modify and extend DIAMOND to make it as sensitive as BLAST while maintaining its superior speed: with the improved DIAMOND, researchers will be able to do comparative genomics research with the accuracy of BLAST at an 80- to 360-fold computational speedup. "In addition, DIAMOND enables researchers to perform alignments with BLAST-like sensitivity on a supercomputer, a high-performance computing cluster, or the Cloud in a truly massively parallel fashion, making extremely large-scale sequence alignments possible in tractable time," adds Klaus Reuter, collaborator from the Max Planck Computing and Data Facility.
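For readers who want to try the tool, a typical DIAMOND run looks roughly like the sketch below. The flag names reflect DIAMOND's documented command-line interface as commonly used - verify them against `diamond help` for the version you install - and the file names are placeholders; none of this is taken from the paper itself.

# Rough sketch of a typical DIAMOND v2 workflow (not from the paper).
# File names are placeholders; flag names should be checked against
# `diamond help` for your installed version.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build a DIAMOND database from a reference protein FASTA file.
run(["diamond", "makedb", "--in", "reference_proteins.faa", "--db", "refdb"])

# 2. Align query proteins against it; the higher-sensitivity modes trade some
#    speed for the BLAST-like sensitivity discussed in the article.
run([
    "diamond", "blastp",
    "--query", "query_proteins.faa",
    "--db", "refdb",
    "--out", "hits.tsv",
    "--outfmt", "6",          # tabular, BLAST-style output
    "--ultra-sensitive",
    "--threads", "8",
])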
Some queries that would have taken other tools two months on a supercomputer can be accomplished in several hours with the new DIAMOND infrastructure. "Considering the exponential growth of the number of available genomes, the speed and accuracy of DIAMOND are exactly what modern genomics will need to learn from the entire collection of all genomes rather than having to focus only on a smaller number of particular species due to a lack of sensitive search capacity," Drost predicts. The team is thus convinced that the full advantages of DIAMOND will become apparent in the years to come.
585 | French Army Testing Boston Dynamics' Robot Dog Spot in Combat Scenarios
Spot, the quadruped robot built by US firm Boston Dynamics, has appeared alongside soldiers during military exercises carried out by the French army. The robot was apparently being used for reconnaissance during a two-day training exercise, but the deployment raises questions about how and where Boston Dynamics' machines will be used in future.
Pictures of the exercises were shared on Twitter by France's foremost military school, the École Spéciale Militaire de Saint-Cyr. It described the tests as "raising students' awareness of the challenges of tomorrow," which include the "robotization of the battlefield."
A report by French newspaper Ouest-France offers more detail, saying that Spot was one of a number of robots being tested by students from France's École Militaire Interarmes (Combined Arms School), with the intention of assessing the usefulness of robots on future battlefields.
Boston Dynamics' vice president of business development Michael Perry told The Verge that the robot had been supplied by a European distributor, Shark Robotics, and that the US firm had not been notified about this particular use. "We're learning about it as you are," says Perry. "We're not clear on the exact scope of this engagement." The company says it was aware that its robots were being used with the French government, including the military.
During the two-day deployment, Ouest-France says soldiers ran a number of scenarios, including an offensive action capturing a crossroads, defensive actions during night and day, and an urban combat test. Each scenario was performed using just humans and then using humans and robots together to see what difference the machines made.
Sources quoted in the article say that the robots slowed down operations but helped keep troops safe. "During the urban combat phase where we weren't using robots, I died. But I didn't die when we had the robot do a recce first," one soldier is quoted as saying. They added that one problem was Spot's battery life: it apparently ran out of juice during an exercise and had to be carried out.
It's not clear what role Spot was playing (neither Shark Robotics nor the École de Saint-Cyr had replied to requests for comment at the time of writing), but Ouest-France suggests it was being used for reconnaissance. The 70lb Spot (31kg) is equipped with cameras and can be remote controlled, with its four legs allowing it to navigate terrain that would challenge wheeled or treaded robots. To date, it's been used to remotely survey a number of environments, from construction sites to factories and underground mines.
In addition to Spot, other machines being tested by the French military included OPTIO-X20 , a remote-controlled vehicle with tank treads and auto cannon built by Estonian firm Milrem Robotics; ULTRO , a wheeled "robot mule" made for carrying equipment built by French state military firm Nexter; and Barakuda , a multipurpose wheeled drone that can provide mobile cover to soldiers with attached armored plating.
Spot's appearance on simulated battlefields raises questions about where the robot will be deployed in future. Boston Dynamics has a long history of developing robots for the US army, but as it's moved into commercial markets it's distanced itself from military connections. Spot is still being tested by a number of US police forces, including by the NYPD , but Boston Dynamics has always stressed that its machines will never be armed. "We unequivocally do not want any customer using the robot to harm people," says Perry.
Spot's terms and conditions forbid it from being used "to harm or intimidate any person or animal, as a weapon, or to enable any weapon," and it's possible to argue that a robot helping to scout buildings for soldiers is not technically harming or intimidating anyone. But if that recon is the prelude to a military engagement it seems like a flimsy distinction.
Boston Dynamics' Perry told The Verge that the company had clear policies forbidding suppliers or customers from weaponizing the robot, but that the firm is "still evaluating" whether or not to ban non-weaponized deployments by military customers.
"We think that the military, to the extent that they do use robotics to take people out of harm's way, we think that's a perfectly valid use of the technology," says Perry. "With this forward-deployment model that you're discussing, it's something we need to better understand to determine whether or not it's actively being used to harm people."
Despite worries from researchers and advocates, militaries around the world are increasingly pushing robots onto the battlefield . Remotely operated drones have been the most significant deployment to date, but other use cases - including robots that can scout, survey, and patrol - are also being tested. Robotic quadrupeds similar to Spot built by rival firm Ghost Robotics are currently being tested by the US Air Force as replacements for stationary surveillance cameras. If robots prove reliable as roaming CCTV, it's only a matter of time before those capabilities are introduced to active combat zones.
Additional reporting by Aude White.
Update April 8th, 10:48AM ET: Updated to clarify that Boston Dynamics was aware that its robots were being used with the French military, but not for this specific two-day exercise.
587 | Facebook Algorithm Shows Gender Bias in Job Ads, Study Finds
Facebook Inc. disproportionately shows certain types of job ads to men and women, researchers have found, calling into question the company's progress in rooting out bias in its algorithms.
The study led by University of Southern California researchers found that Facebook systems were more likely to present job ads to users if their gender identity reflected the concentration of that gender in a particular position or industry. In tests run late last year, ads to recruit delivery drivers for Domino's Pizza Inc. were disproportionately shown to men, while women were more likely to receive notices in recruiting shoppers for grocery-delivery service Instacart Inc.
The skew extended across job tiers, suggesting "a platform whose algorithm learns and perpetuates the existing difference in employee demographics." USC's Aleksandra Korolova said she was surprised at the company's failure to remedy the situation, because "they've known about this for years."
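Audit studies of this kind generally boil down to comparing delivery rates between groups and asking whether the gap could plausibly be chance. The sketch below shows that arithmetic on invented numbers; it is not the USC team's data, code or exact methodology.

# Generic sketch of how an ad-delivery audit can test a gender skew for
# statistical significance. The counts are invented for illustration.
from math import sqrt, erf

def two_proportion_z(men_shown, men_total, women_shown, women_total):
    p1, p2 = men_shown / men_total, women_shown / women_total
    pooled = (men_shown + women_shown) / (men_total + women_total)
    se = sqrt(pooled * (1 - pooled) * (1 / men_total + 1 / women_total))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical: a delivery-driver ad reached 620 of 1,000 men vs 410 of 1,000 women.
z, p = two_proportion_z(620, 1000, 410, 1000)
print(f"z = {z:.2f}, p = {p:.2g}")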
588 | AI Could 'Crack the Language of Cancer, Alzheimer's'
Powerful algorithms used by Netflix, Amazon and Facebook can 'predict' the biological language of cancer and neurodegenerative diseases like Alzheimer's, scientists have found.
Big data produced during decades of research was fed into a computer language model to see if artificial intelligence can make more advanced discoveries than humans.
Academics based at St John's College, University of Cambridge, found the machine-learning technology could decipher the 'biological language' of cancer, Alzheimer's, and other neurodegenerative diseases.
Their ground-breaking study has been published in the scientific journal PNAS today and could be used in the future to 'correct the grammatical mistakes inside cells that cause disease'.
Professor Tuomas Knowles, lead author of the paper and a Fellow at St John's College, said: "Bringing machine-learning technology into research into neurodegenerative diseases and cancer is an absolute game-changer. Ultimately, the aim will be to use artificial intelligence to develop targeted drugs to dramatically ease symptoms or to prevent dementia happening at all."
Every time Netflix recommends a series to watch or Facebook suggests someone to befriend, the platforms are using powerful machine-learning algorithms to make highly educated guesses about what people will do next. Voice assistants like Alexa and Siri can even recognise individual people and instantly 'talk' back to you.
Dr Kadi Liis Saar, first author of the paper and a Research Fellow at St John's College, used similar machine-learning technology to train a large-scale language model to look at what happens when something goes wrong with proteins inside the body to cause disease.
She said: "The human body is home to thousands and thousands of proteins and scientists don't yet know the function of many of them. We asked a neural network based language model to learn the language of proteins.
"We specifically asked the programme to learn the language of shapeshifting biomolecular condensates - droplets of proteins found in cells - that scientists really need to understand to crack the language of biological function and malfunction that cause cancer and neurodegenerative diseases like Alzheimer's. We found it could learn, without being explicitly told, what scientists have already discovered about the language of proteins over decades of research."
Proteins are large, complex molecules that play many critical roles in the body. They do most of the work in cells and are required for the structure, function and regulation of the body's tissues and organs - antibodies, for example, are a protein that function to protect the body.
Alzheimer's, Parkinson's and Huntington's diseases are three of the most common neurodegenerative diseases, but scientists believe there are several hundred.
In Alzheimer's disease, which affects 50 million people worldwide, proteins go rogue, form clumps and kill healthy nerve cells. A healthy brain has a quality control system that effectively disposes of these potentially dangerous masses of proteins, known as aggregates.
Scientists now think that some disordered proteins also form liquid-like droplets of proteins called condensates that don't have a membrane and merge freely with each other. Unlike protein aggregates which are irreversible, protein condensates can form and reform and are often compared to blobs of shapeshifting wax in lava lamps.
Professor Knowles said: "Protein condensates have recently attracted a lot of attention in the scientific world because they control key events in the cell such as gene expression - how our DNA is converted into proteins - and protein synthesis - how the cells make proteins.
"Any defects connected with these protein droplets can lead to diseases such as cancer. This is why bringing natural language processing technology into research into the molecular origins of protein malfunction is vital if we want to be able to correct the grammatical mistakes inside cells that cause disease."
Dr Saar said: "We fed the algorithm all of data held on the known proteins so it could learn and predict the language of proteins in the same way these models learn about human language and how WhatsApp knows how to suggest words for you to use.
"Then we were able ask it about the specific grammar that leads only some proteins to form condensates inside cells. It is a very challenging problem and unlocking it will help us learn the rules of the language of disease."
The machine-learning technology is developing at a rapid pace due to the growing availability of data, increased computing power, and technical advances which have created more powerful algorithms.
Further use of machine-learning could transform future cancer and neurodegenerative disease research.
Discoveries could be made beyond what scientists currently already know and speculate about diseases and potentially even beyond what the human brain can understand without the help of machine-learning.
Dr Saar explained: "Machine-learning can be free of the limitations of what researchers think are the targets for scientific exploration and it will mean new connections will be found that we have not even conceived of yet. It is really very exciting indeed."
The network developed has now been made freely available to researchers around the world to enable advances to be worked on by more scientists.
Published: 8/4/2021
589 | Robots Can Be More Aware of Human Co-Workers, with System That Provides Context
Instead of only judging the distance between itself and its human co-workers, the human-robot collaboration system can identify each worker it works with, as well as the worker's skeleton model, which is an abstract of the worker's body volume, says Hongyi Liu, a researcher at KTH Royal Institute of Technology. Using this information, the context-aware robot system can recognize the worker's pose and even predict the next pose. These abilities provide the robot with a context to be aware of while interacting.
Liu says that the system operates with artificial intelligence that requires less computational power and smaller datasets than traditional machine learning methods. It relies instead on a form of machine learning called transfer learning - which reuses knowledge developed through training before being adapted into an operational model.
The research was published in the recent issue of Robotics and Computer-Integrated Manufacturing, and was co-authored by KTH Professor Lihui Wang .
Liu says that the technology is out ahead of today's International Organization for Standardization (ISO) requirements for collaborative robot safety, so implementing it would require action at the industry level. But the context awareness offers better efficiency than the one-dimensional interaction workers now experience with robots, he says.
"Under the ISO standard and technical specification, when a human approaches a robot it slows down, and if he or she comes close enough it will stop. If the person moves away it resumes. That's a pretty low level of context awareness," he says.
"It jeopardizes efficiency. Production is slowed and humans cannot work closely to robots."
Liu compares the context-aware robot system to a self-driving car that recognizes how long a stoplight has been red and anticipates moving again. Instead of braking or downshifting, it begins to adjust its speed by cruising toward the intersection, thereby sparing wear on the brakes and transmission.
Experiments with the system showed that with context, a robot can operate more safely and efficiently without slowing down production. In one test performed with the system, a robot arm's path was blocked unexpectedly by someone's hand. But rather than stop, the robot adjusted - it predicted the future trajectory of the hand and the arm moved around the hand.
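The article does not spell out the predictor the KTH system uses, but the behaviour described above can be sketched with something as simple as constant-velocity extrapolation of the tracked hand plus a clearance check against the robot's planned path. Everything below - the model, the coordinates, the threshold - is an invented illustration, not the KTH implementation.

# Minimal sketch: extrapolate a tracked hand a few steps ahead and check
# clearance against the robot's planned waypoints. All numbers are invented.
import numpy as np

def predict_positions(track, steps, dt=0.1):
    """Constant-velocity extrapolation from the last two observed positions."""
    track = np.asarray(track, dtype=float)
    velocity = (track[-1] - track[-2]) / dt
    return np.array([track[-1] + velocity * dt * (i + 1) for i in range(steps)])

def path_is_clear(robot_waypoints, predicted_hand, min_clearance=0.15):
    robot = np.asarray(robot_waypoints, dtype=float)
    n = min(len(robot), len(predicted_hand))
    dists = np.linalg.norm(robot[:n] - predicted_hand[:n], axis=1)
    return bool(np.all(dists > min_clearance)), float(dists.min())

hand_track = [[0.60, 0.40, 0.90], [0.58, 0.40, 0.88], [0.56, 0.40, 0.86]]  # metres
robot_plan = [[0.55, 0.40, 0.85], [0.50, 0.40, 0.80], [0.45, 0.40, 0.75]]

hand_future = predict_positions(hand_track, steps=3)
clear, closest = path_is_clear(robot_plan, hand_future)
print("path clear:", clear, "- closest approach (m):", round(closest, 3))

When the check fails, a context-aware controller would replan around the predicted hand positions rather than simply stopping, which is the efficiency gain Liu describes.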
"This is safety not just from the technical point of view in avoiding collisions, but being able to recognize the context of the assembly line," he says. "This gives an additional layer of safety."
The research was an extension of the EU Horizon 2020 project, Symbiotic Human Robot Collaborative Assembly (SYMBIO-TIC), which was completed in 2019.
David Callahan
Instead of being able to only judge distance between itself and its human co-workers, the human-robot collaboration system can identify each worker it works with, as well as the worker's skeleton model, which is an abstract of the worker's body volume, says Hongyi Liu , a researcher at KTH Royal Institute of Technology. Using this information, the context-aware robot system can recognize the worker's pose and even predict the next pose. These abilities provide the robot with a context to be aware of while interacting.
Liu says that the system operates with artificial intelligence that requires less computational power and smaller datasets than traditional machine learning methods. It relies instead on a form of machine learning called transfer learning - which reuses knowledge developed through training before being adapted into an operational model.
The research was published in the recent issue of Robotics and Computer-Integrated Manufacturing, and was co-authored by KTH Professor Lihui Wang .
Liu says that the technology is out ahead of today's International Organization for Standards (ISO) requirements for collaborative robot safety, so implementation of the technology would require industrial action. But the context awareness offers better efficiency than the one-dimensional interaction workers now experience with robots, he says.
"Under the ISO standard and technical specification, when a human approaches a robot it slows down, and if he or she comes close enough it will stop. If the person moves away it resumes. That's a pretty low level of context awareness," he says.
"It jeopardizes efficiency. Production is slowed and humans cannot work closely to robots."
Liu compares the context-aware robot system to a self-driving car that recognizes how long a stoplight has been red and anticipates moving again. Instead of braking or downshifting, it begins to adjust its speed by cruising toward the intersection, thereby sparing wear on the brakes and transmission.
Experiments with the system showed that with context, a robot can operate more safely and efficiently without slowing down production. In one test performed with the system, a robot arm's path was blocked unexpectedly by someone's hand. But rather than stop, the robot adjusted - it predicted the future trajectory of the hand and the arm moved around the hand.
"This is safety not just from the technical point of view in avoiding collisions, but being able to recognize the context of the assembly line," he says. "This gives an additional layer of safety."
The research was an extension of the EU Horizon 2020 project, Symbiotic Human Robot Collaborative Assembly (SYMBIO-TIC), which was completed in 2019.
David Callahan
|||
590 | Sticker Absorbs Sweat - and Might Diagnose Cystic Fibrosis | In the Middle Ages, a grim adage sometimes turned up in European folklore and children's stories: Woe to that child which when kissed on the forehead tastes salty. He is bewitched and soon must die. A salty-headed newborn was a frightful sign of a mysterious illness. The witchcraft diagnosis didn't hold, of course, but today researchers think that the salty taste warned of the genetic disease we now know as cystic fibrosis.
Cystic fibrosis affects over 30,000 people in the United States, and over 70,000 globally. Mutations in the CFTR gene garble cells' blueprints for making protein tunnels for chloride ions. Chloride's negative charge attracts water, so without much chloride meandering into cells, the body's mucus gets thicker and stickier, making breathing a struggle and often trapping dangerous bacteria in the lungs. It also disrupts digestive enzymes from traveling out of the pancreas and into the gut, causing inflammation and malnutrition.
Salty sweat is a telltale sign. Doctors sometimes meet kids with 10 times higher chloride levels in their sweat than expected. Since the 1960s, measuring chloride has given doctors their clearest diagnoses: They stimulate people's sweat glands, soak up as much as they can, and send the samples to labs. But the tools are expensive, bulky, and hard to fit onto squirming infants. Sometimes the tests don't collect enough fluid for a diagnosis. And if a test fails, parents and their newborn often have to wait a couple of weeks to come back.
"That failure to collect enough sweat just delays time to diagnosis," says Tyler Ray, a mechanical engineer with the University of Hawaii at Mānoa who develops wearable biosensors. That means losing precious weeks when doctors could have prescribed treatments. It also creates a barrier for folks who need to drive for hours - or fly over oceans - to reach a hospital that can run the test. "There are not many throughout the country," says Ray. "In fact, Hawaii does not have one for the general population."
Ray's team of engineers and pathologists think they have an alternative: stick-on sweat collectors. In a study published last week in Science Translational Medicine, they report creating a malleable, coin-sized sticker that changes color as it absorbs progressively higher salt concentrations indicative of cystic fibrosis. When tested on babies and adults, the stickers filled with more sweat than traditional devices, and did so faster.
"This is exciting technology and something very new," says Edward Fong, a pediatric pulmonologist with Hawaii Pacific Health who was not involved in the study. Fong thinks these stickers would make cystic fibrosis diagnosis more accessible. If it lands regulatory approval, he says, "we do not need to send our patients 2,500 miles away to be able to get their sweat tested."
"Making sweat tests easier would be the one obvious win," agrees Gordon Dexter, a 36-year-old from Maryland who lives with the condition. Dexter is a moderator for the Reddit community r/CysticFibrosis , where people sympathize about digestive hardships and celebrate triumphs over lung bacteria. "Sweat tests can be kind of ambiguous or just difficult to do, and that is a recurring question that I've seen," Dexter says.
Ray has had an eye on sweat for years. In 2016, as a postdoctoral fellow, he joined John Rogers' lab at Northwestern University, where researchers had been toying with conducting sweat analysis on wearable sensors. They wanted to create new devices with tiny channels, valves, and dyes that could track body chemistry in real time. Soon after Ray arrived, the lab published a paper demonstrating a wearable sensor that could reveal glucose, lactate, and chloride ion levels in sweat, as well as its pH. That study pitched the sensors as monitors for athletes or military members in training, and the researchers tested it during a long-distance bike race. The tech got a lot of attention: Ray later worked with sports teams like the Chicago Cubs, and Gatorade has used the technology to sell its Gx Sweat Patch . In 2017, the patches were displayed at New York's Museum of Modern Art and were used to promote hydration at the South by Southwest festival.
Pathologists also noticed. "Right when that paper came out, we were contacted by Lurie Children's Hospital," says Ray. A researcher at the Chicago institution believed this type of sensor could collect enough sweat to give conclusive diagnoses. Ray's team agreed that a wearable could probably collect more sweat faster. And to avoid the geographic barriers that come with needing a lab, they could embed most of the lab analysis steps right on the patch.
Their resulting stickers are circular and about one inch across. They can lie flat, hug the wide curve of an adult arm, or conform to a small infant's limbs. (They also look like stickers. Ray's team placed popular cartoon decals on top, hoping to make them even more kid-friendly.) Sweat soaks up through the center and into thin canals that zigzag out to the sticker's edge.
To run the test, a clinician uses a weak electric current to drive a sweat-gland-activating gel called pilocarpine into the patient's skin. This is the standard starting point for sweat tests, but what happens next is different. Five minutes later, the sticker goes on, and the patient's sweat slips into its tiny capillaries for up to 30 minutes. It immediately mixes with a clear, gel-like pool of silver chlorinalite, a chemical that changes color when it bumps into chloride ions. If the sweat doesn't contain these ions, the streams stay clear. But progressively higher ion concentrations quickly turn it a pale pink and then a dark violet. Clinicians then snap a picture of the color change, run the photo through an analysis app, and gauge the chloride levels. | University of Hawaii at Manoa, Northwestern University, and other researchers collaborated on the design of a sticker that changes color as it absorbs progressively higher amounts of salt, which can indicate the presence of cystic fibrosis. The circular sensors lie flat on the skin, absorbing perspiration through its center and into capillaries that extend to its edge. A clinician uses a low electric current to drive a sweat-gland-activating gel into the skin, and when the sticker is applied five minutes later, sweat slips into the sticker's capillaries, where it blends with silver chlorinalite. This chemical changes color in contact with chloride ions, and an application processes a smartphone-captured photo of the sticker. Said Northwestern's John Rogers, "The idea of just being able to sense what's going on in yourself, simply by looking down at your smartphone or a sensor - it's just amazing." | [] | [] | [] | scitechnews | None | None | None | None | University of Hawaii at Manoa, Northwestern University, and other researchers collaborated on the design of a sticker that changes color as it absorbs progressively higher amounts of salt, which can indicate the presence of cystic fibrosis. The circular sensors lie flat on the skin, absorbing perspiration through its center and into capillaries that extend to its edge. A clinician uses a low electric current to drive a sweat-gland-activating gel into the skin, and when the sticker is applied five minutes later, sweat slips into the sticker's capillaries, where it blends with silver chlorinalite. This chemical changes color in contact with chloride ions, and an application processes a smartphone-captured photo of the sticker. Said Northwestern's John Rogers, "The idea of just being able to sense what's going on in yourself, simply by looking down at your smartphone or a sensor - it's just amazing."
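The analysis-app step described in the article (photograph the channel, then convert its colour into a chloride estimate) can be sketched roughly as follows. The "violet index" colour feature and the calibration points are placeholders invented for illustration; a real device would be calibrated against laboratory reference measurements.

```python
# Rough sketch of a colourimetric readout: map the photographed colour of the
# sweat-filled channel to a chloride concentration via a calibration curve.
import numpy as np

calib_conc_mmol  = np.array([10.0, 30.0, 60.0, 90.0, 120.0])   # hypothetical
calib_violet_idx = np.array([0.05, 0.20, 0.45, 0.70, 0.90])    # hypothetical

def violet_index(rgb_pixels: np.ndarray) -> float:
    """Crude colour feature: how strongly the region leans toward violet
    (high red and blue, low green), scaled to the 0..1 range."""
    r, g, b = rgb_pixels[..., 0], rgb_pixels[..., 1], rgb_pixels[..., 2]
    return float(np.clip(((r + b) / 2.0 - g) / 255.0, 0.0, 1.0).mean())

def estimate_chloride(rgb_pixels: np.ndarray) -> float:
    """Interpolate the calibration curve at the measured colour feature."""
    return float(np.interp(violet_index(rgb_pixels), calib_violet_idx, calib_conc_mmol))

# A synthetic 10x10 patch of pale-violet pixels stands in for a cropped photo of
# the channel; in practice the app would first segment the channel from the image.
patch = np.full((10, 10, 3), [180.0, 120.0, 200.0])
print(f"estimated sweat chloride: {estimate_chloride(patch):.0f} mmol/L")
```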
|||
592 | Toyota Unveils Models in Advanced Driver-Assist Technology Push | TOKYO (Reuters) - Toyota Motor Corp unveiled on Thursday new models of Lexus and Mirai in Japan, equipped with advanced driver assistance, as competition heats up to develop more self-driving and connected cars.
Toyota's latest launch comes as automakers, electric car startups and tech giants invest heavily in so-called active safety features.
The Japanese carmaker's new driving assist technology, or Advanced Drive, features a level 2 autonomous system that assists with driving tasks such as keeping the car in its lane, maintaining the distance from other vehicles and changing lanes under the driver's supervision on expressways or other motor-vehicle-only roads.
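For illustration only, the two core level 2 behaviours mentioned above (lane centring and distance keeping) reduce to simple feedback rules of the following shape; the gains, set-points and units are invented and have no connection to Toyota's Advanced Drive implementation.

```python
# Toy feedback rules for lane centring and headway (distance) keeping.
def steering_correction(lateral_offset_m: float, k_p: float = 0.4) -> float:
    """Proportional steering command that nudges the car back toward lane centre."""
    return -k_p * lateral_offset_m

def speed_command(current_speed_mps: float, gap_m: float,
                  desired_headway_s: float = 2.0, k_gap: float = 0.1) -> float:
    """Adjust speed so the time gap to the lead vehicle approaches the target."""
    desired_gap_m = desired_headway_s * current_speed_mps
    return max(0.0, current_speed_mps + k_gap * (gap_m - desired_gap_m))

print(steering_correction(0.5))                            # drifting 0.5 m right -> steer left
print(speed_command(current_speed_mps=27.0, gap_m=40.0))   # ~97 km/h with a 40 m gap -> ease off
```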
The luxury sedan Lexus LS will be on sale from Thursday, costing between about 16.3 million yen ($148,627.70) and 17.9 million yen, while the second-generation Mirai hydrogen fuel cell car will be offered on April 12 at between 8.4 million and 8.6 million yen.
The new models are Toyota's first products brought to the market that provide over-the-air updates and utilise AI technology centred on deep learning, said Toyota executive James Kuffner, who is also the head of Toyota's research unit Woven Planet.
"This is really an important first step in our journey towards software-first development," he said at an online briefing on Thursday, adding that the company has tried to design the software to be truly global and to provide re-usability.
In the future, software features on cars will be "upgradable" and "more customisable" much like how people personalise their smartphones, Kuffner added.
Fully self-driven cars are still likely to be years away, but rival General Motors Co early this year made a splash at the virtual Consumer Electronics Show with a fully-autonomous all-electric flying Cadillac concept, while Chinese search engine operator Baidu unveiled a partnership with local car brand Geely.
Toyota's domestic competitor, Honda Motor Co Ltd, last month unveiled a partially self-driving Legend sedan in Japan, becoming the world's first carmaker to sell a vehicle equipped with new, certified level 3 automation technology.
($1 = 109.7000 yen) | Japanese automaker Toyota Motor has unveiled the newest models of its Lexus and Mirai vehicles in Japan, outfitted with advanced driver assistance systems (ADAS). Toyota's Advanced Drive solution features a level 2 autonomous system that helps with driving, including keeping the vehicle in its lane, maintaining its distance from other vehicles, and changing lanes safely. Toyota's James Kuffner said the new models are the first the company has brought to market that provide over-the-air updates to its software, and which employ deep learning artificial intelligence. Future cars, Kuffner added, will be "upgradable" and "more customizable." | [] | [] | [] | scitechnews | None | None | None | None | Japanese automaker Toyota Motor has unveiled the newest models of its Lexus and Mirai vehicles in Japan, outfitted with advanced driver assistance systems (ADAS). Toyota's Advanced Drive solution features a level 2 autonomous system that helps with driving, including keeping the vehicle in its lane, maintaining its distance from other vehicles, and changing lanes safely. Toyota's James Kuffner said the new models are the first the company has brought to market that provide over-the-air updates to its software, and which employ deep learning artificial intelligence. Future cars, Kuffner added, will be "upgradable" and "more customizable."
|||
593 | DNA Breakthrough Could Finally Make Tape Storage Obsolete | A new technology has been developed that allows for digital binary files to be converted into the genetic alphabet, bringing DNA storage one step closer to reality.
Researchers based out of Los Alamos National Laboratory have created a new codec that minimizes the error rate when writing to molecular storage, as well as making any potential issues easier to correct.
"Our software, the Adaptive DNA Storage Codec (ADS Codex), translates data from what a computer understands into what biology understands," explained Latchesar Ionkov, who heads up the project. "It's like translating English to Chinese, only harder."
The Los Alamos team is part of the wider Molecular Information Storage (MIST) program. The immediate goal of the project is to develop DNA storage technologies capable of writing 1TB and reading 10TB within 24 hours, at a cost of less than $1,000.
With all the various kinks ironed out, DNA storage could provide a way to store vast amounts of data at low cost, which will be vital in the coming years as the quantity of data produced continues to expand.
As compared with tape storage, which is used today for archival purposes, DNA is far more dense, degrades nowhere near as quickly and requires no maintenance.
"DNA offers a promising solution compared to tape, the prevailing method of cold storage, which is a technology dating to 1951," said Bradley Settlemyer, another researcher at Los Alamos.
"DNA storage could disrupt the way you think about archival storage, because data retention is so long and the data density so high. You could store all of YouTube in your refrigerator, instead of in acres and acres of data centers."
However, Settlemyer also warned of the various "daunting technological hurdles" that will need to be overcome before DNA storage can be brought to fruition, largely to do with the interoperability of different technologies.
The Los Alamos team focuses specifically on issues surrounding the coding and decoding of information, as binary 0s and 1s are translated into the four-letter (A, C, G and T) genetic alphabet and back again.
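The direction of that translation can be shown with the simplest possible mapping, two bits per base. The real ADS Codex is far more sophisticated, since it must also avoid troublesome sequences and tolerate insertions and deletions, so this sketch is only meant to make the conversion concrete.

```python
# Simplest possible illustration of the binary <-> DNA translation: two bits per base.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bytes_to_dna(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(sequence: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"LANL"
encoded = bytes_to_dna(message)
print(encoded)                        # CATACAACCATGCATA
assert dna_to_bytes(encoded) == message
```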
The ADS Codex is designed to combat natural errors that occur when additional values are added or accidentally deleted from the series of letters that make up a DNA sequence. When this data is converted back to binary, the codec checks for anomalies and, if one is detected, adds and removes letters from the chain until the data can be verified.
Version 1.0 of the ADS Codex has now been finalized and will soon be used to assess the performance of systems built by other members of the MIST project.
Via Storage Newsletter | Researchers at the U.S. Department of Energy's Los Alamos National Laboratory (LANL) have brought DNA data storage one step closer to realization by converting digital binary files into a genetic alphabet. LANL's Latchesar Ionkov said, "Our software, the Adaptive DNA Storage Codec [ADS Codex], translates data from what a computer understands into what biology understands. It's like translating English to Chinese, only harder." To combat natural errors that occur when additional values are added or accidentally erased from the letters composing a DNA sequence, when the data is converted back to binary, the codec looks for anomalies and, if one is spotted, adds and subtracts letters from the chain until the data can be verified. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the U.S. Department of Energy's Los Alamos National Laboratory (LANL) have brought DNA data storage one step closer to realization by converting digital binary files into a genetic alphabet. LANL's Latchesar Ionkov said, "Our software, the Adaptive DNA Storage Codec [ADS Codex], translates data from what a computer understands into what biology understands. It's like translating English to Chinese, only harder." To combat natural errors that occur when additional values are added or accidentally erased from the letters composing a DNA sequence, when the data is converted back to binary, the codec looks for anomalies and, if one is spotted, adds and subtracts letters from the chain until the data can be verified.
|||
595 | Robo-Starfish Aims to Enable Closer Study of Aquatic Life | Biologists have long experienced the challenges of documenting ocean life, with many species of fish proving quite sensitive to the underwater movements of humans.
As a possible solution, computer scientists have been developing special marine robots that can stealthily move among their carbon-based counterparts: in 2018, for example, a team from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) fabricated a soft robotic fish that autonomously swam with real fish along the coral reefs of Fiji.
However, the complex dynamics of how water moves - and its ability to quickly ruin some perfectly good electronics systems - have made underwater robots especially difficult to develop compared to ones for air or land. With the fish, the CSAIL team had to go through months of trial and error to manually tweak the design so that it could actually reliably work in the water.
While that robot was an especially complex one, a group led by MIT professors Wojciech Matusik and Daniela Rus still felt that there was room to speed up the production process. With that in mind, they have now created a new tool for simulating and fabricating a functional soft robot in a matter of hours.
The team used their system to make a soft robotic starfish made out of silicone foam and capable of moving with a single low-powered actuator. The starfish moves via tendons in its four legs, which are connected to a servo motor that's used to flex and relax the legs.
"The passive interactions between an underwater robot and the fluid forces around it - whether it's a calm current or an undulating wave - are much more complicated than when a robot is walking on stable terrain, which makes creating its control systems quite difficult," says CSAIL postdoc Josephine Hughes, co-lead author of a new paper alongside PhD student Tao Du about the starfish. "But using this simulator, a process that might normally take days or weeks can happen in just a few hours."
Du says that the team chose a starfish design because of the simplicity and elegance of its motion, with the squeezing and releasing of its legs creating forward movement. However, the team found that the simulator works for a range of body types, and so they will next be exploring designs inspired by sea turtles, manta rays and sharks that involve more complex structures such as joints, fins and flippers.
The group's tool involves a machine learning model doing an initial simulation and design of the control mechanisms of the robot, which is then rapidly fabricated. Real-world experiments with the robot are then used to acquire more data to repeatedly improve and optimize its design. The result is that the robot typically only has to be re-fabricated one more time. (A separate paper about the development of the simulation tool is currently under review.)
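Reduced to a toy parameter search, that loop reads roughly as follows; the surrogate "simulator," the stand-in "tank experiment" and the gait parameters are all invented for illustration and are not the team's actual models.

```python
# Schematic of the simulate -> fabricate -> test -> refine loop, reduced to a
# toy search over two gait parameters. Both objective functions are stand-ins:
# the real system learns a model of the robot-fluid dynamics from data.
import random

def simulated_speed(frequency_hz: float, amplitude: float) -> float:
    """Toy surrogate model of swimming speed for a gait parameterisation."""
    return -(frequency_hz - 2.0) ** 2 - (amplitude - 0.6) ** 2

def measured_speed(frequency_hz: float, amplitude: float) -> float:
    """Stand-in for a tank experiment; deliberately differs from the simulator."""
    return -(frequency_hz - 2.3) ** 2 - (amplitude - 0.55) ** 2 + random.gauss(0.0, 0.01)

def best_in_simulation(n_samples: int = 2000):
    candidates = [(random.uniform(0.5, 4.0), random.uniform(0.1, 1.0))
                  for _ in range(n_samples)]
    return max(candidates, key=lambda p: simulated_speed(*p))

# 1) Design the controller in simulation.
freq, amp = best_in_simulation()
# 2) Run a small grid of real experiments around the simulated optimum and keep
#    the best, mirroring the "re-fabricate roughly once" workflow.
trials = [(freq + df, amp + da) for df in (-0.3, 0.0, 0.3) for da in (-0.1, 0.0, 0.1)]
freq, amp = max(trials, key=lambda p: measured_speed(*p))
print(f"refined gait: {freq:.2f} Hz, amplitude {amp:.2f}")
```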
"When doing robotic simulation, we have to make approximations that, by definition, create a gap between simulation and reality," says Cecilia Laschi, a professor of control and mechatronics at the National University of Singapore who was not involved in the research. "This work is intended to reduce that reality gap, with a mixed-loop of simulated and real experiments that's quite effective."
For the starfish's body, the team used silicone foam because of its elastic properties, natural buoyancy, and ability to be fabricated quickly and easily. In experiments, the researchers found that the starfish could move through the water four times faster than when using a controller hand-crafted by a human expert.
Indeed, Hughes says that the team discovered that the simulator seems to employ control strategies that humans would not have thought of themselves.
"With the robot starfish we learned that, in addition to those quite visible leg propulsions they do, there are some subtler high-frequency movements that can give them important momentum," Hughes says.
The project builds off of a series of CSAIL projects focused on soft robots, which Rus says have the potential to be safer, sturdier and more nimble than their rigid-bodied counterparts. Researchers have increasingly turned to soft robots for environments that require moving through tight quarters, since such robots are more resilient in being able to recover from collisions. Laschi says that the team's tool could be used to develop robots for measuring data at different locations in the deep ocean, and for generally envisioning robots that can move in new ways that researchers haven't yet thought of.
"Bio-inspired robots like the starfish robot and SoFi can get closer to marine life without disturbing it," says Rus. "In the future, by rapidly designing and building bio-inspired robotic instruments, it will be possible to create custom observatories that can be deployed in the wilderness to observe life."
Du and Hughes co-wrote the paper with Matusik, Rus, and MIT undergrad Sebastien Wah. The paper was published this week in the Journal of Robotics Automation Letters, and will also be presented virtually next month at IEEE's International Conference on Soft Robotics (RoboSoft). | Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab (CSAIL) have developed a tool to simulate and fabricate a functional soft robot, which they used to create a soft robotic starfish to study aquatic life. The tool aims to address the challenges of designing an effective underwater robot given the movement of water, and speed up the process for producing one. The tool uses a machine learning model to perform an initial simulation and design of control mechanisms and quickly fabricate the robot, after which real-world experiments are conducted to generate more data to optimize the design. The process generally requires that the robot be re-fabricated once. CSAIL's Josephine Hughes said the simulator used control strategies that humans would not have considered. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab (CSAIL) have developed a tool to simulate and fabricate a functional soft robot, which they used to create a soft robotic starfish to study aquatic life. The tool aims to address the challenges of designing an effective underwater robot given the movement of water, and speed up the process for producing one. The tool uses a machine learning model to perform an initial simulation and design of control mechanisms and quickly fabricate the robot, after which real-world experiments are conducted to generate more data to optimize the design. The process generally requires that the robot be re-fabricated once. CSAIL's Josephine Hughes said the simulator used control strategies that humans would not have considered.
|||
597 | Computer Model Fosters Potential Improvements to 'Bionic Eye' Technology | By Wayne Lewis
There are millions of people who face the loss of their eyesight from degenerative eye diseases. The genetic disorder retinitis pigmentosa alone affects 1 in 4,000 people worldwide.
Today, there is technology available to offer partial eyesight to people with that syndrome. The Argus II, the world's first retinal prosthesis, reproduces some functions of a part of the eye essential to vision, to allow users to perceive movement and shapes.
While the field of retinal prostheses is still in its infancy, for hundreds of users around the globe, the "bionic eye" enriches the way they interact with the world on a daily basis. For instance, seeing outlines of objects enables them to move around unfamiliar environments with increased safety.
That is just the start. Researchers are seeking future improvements upon the technology, with an ambitious objective in mind.
"Our goal now is to develop systems that truly mimic the complexity of the retina," said Gianluca Lazzi , PhD, MBA, a Provost Professor of Ophthalmology and Electrical Engineering at the Keck School of Medicine of USC and the USC Viterbi School of Engineering .
He and his USC colleagues cultivated progress with a pair of recent studies using an advanced computer model of what happens in the retina. Their experimentally validated model reproduces the shapes and positions of millions of nerve cells in the eye, as well as the physical and networking properties associated with them.
"Things that we couldn't even see before, we can now model," said Lazzi, who is also the Fred H. Cole Professor in Engineering and director of the USC Institute for Technology and Medical Systems . "We can mimic the behavior of the neural systems, so we can truly understand why the neural system does what it does."
Focusing on models of nerve cells that transmit visual information from the eye to the brain, the researchers identified ways to potentially increase clarity and grant color vision to future retinal prosthetic devices.
The eye, bionic and otherwise
To understand how the computer model could improve the bionic eye, it helps to know a little about how vision happens and how the prosthesis works.
When light enters the healthy eye, the lens focuses it onto the retina, at the back of the eye. Cells called photoreceptors translate the light into electrical impulses that are processed by other cells in the retina. After processing, the signals are passed along to ganglion cells, which deliver information from retina to brain through long tails, called axons, that are bundled together to make up the optic nerve.
Photoreceptors and processing cells die off in degenerative eye diseases. Retinal ganglion cells typically remain functional longer; the Argus II delivers signals directly to those cells.
"In these unfortunate conditions, there is no longer a good set of inputs to the ganglion cell," Lazzi said. "As engineers, we ask how we can provide that electrical input."
A patient receives a tiny eye implant with an array of electrodes. Those electrodes are remotely activated when a signal is transmitted from a pair of special glasses that have a camera on them. The patterns of light detected by the camera determine which retinal ganglion cells are activated by the electrodes, sending a signal to the brain that results in the perception of a black-and-white image comprising 60 dots.
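The camera-to-electrode step can be pictured as a simple downsampling of the glasses-camera frame onto the 60 electrodes; the 6-by-10 layout and the brightness threshold below are simplifying assumptions for illustration, not the implant's firmware.

```python
# Sketch of mapping a camera frame onto 60 electrodes: average the frame into
# one brightness value per electrode and stimulate those above a threshold.
import numpy as np

ROWS, COLS = 6, 10                      # 60 electrodes (assumed layout)

def electrode_pattern(frame: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """frame: 2-D grayscale image with values in 0..1; returns a boolean grid."""
    h, w = frame.shape
    cropped = frame[: (h // ROWS) * ROWS, : (w // COLS) * COLS]
    bins = cropped.reshape(ROWS, h // ROWS, COLS, w // COLS).mean(axis=(1, 3))
    return bins > threshold             # True = stimulate that electrode

frame = np.zeros((120, 200))
frame[30:90, 80:120] = 1.0              # a bright, doorway-like region in the scene
print(electrode_pattern(frame).astype(int))
```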
Computer model courts new advances
Under certain conditions, an electrode in the implant will incidentally stimulate the axons of cells neighboring its target. For the user of the bionic eye, this off-target stimulation of axons results in the perception of an elongated shape instead of a dot. In a study published in IEEE Transactions on Neural Systems and Rehabilitation Engineering, Lazzi and his colleagues deployed the computer model to address this issue.
"You want to activate this cell, but not the neighboring axon," Lazzi said. "So we tried to design an electrical stimulation waveform that more precisely targets the cell."
The researchers used models for two subtypes of retinal ganglion cells, at the single-cell level as well as in huge networks. They identified a pattern of short pulses that preferentially targets cell bodies, with less off-target activation of axons.
Another recent study in the journal Scientific Reports applied the same computer modeling system to the same two cell subtypes to investigate how to encode color.
This research builds upon earlier investigations showing that people using the Argus II perceive variations in color with changes in the frequency of the electrical signal - the number of times the signal repeats over a given duration. Using the model, Lazzi and his colleagues developed a strategy for adjusting the signal's frequency to create the perception of the color blue.
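In outline, encoding a colour then amounts to choosing a pulse rate from a per-user calibration; the hue and frequency values below are placeholders, not the calibration reported in the study.

```python
# Outline of frequency-based colour encoding: pick the stimulation pulse rate
# that a given user has reported as matching the target hue.
import numpy as np

calib_hue_deg = np.array([ 60.0, 120.0, 180.0, 220.0, 240.0])   # yellow .. blue
calib_freq_hz = np.array([  6.0,  10.0,  20.0,  40.0,  80.0])   # hypothetical

def frequency_for_hue(target_hue_deg: float) -> float:
    """Interpolate the per-user calibration to get a pulse frequency."""
    return float(np.interp(target_hue_deg, calib_hue_deg, calib_freq_hz))

print(f"stimulate at ~{frequency_for_hue(240.0):.0f} Hz to encode blue")
```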
Beyond the possibility of adding color vision to the bionic eye, encoding with hues could be combined with artificial intelligence in future advances based on the system, so that particularly important elements in a person's surroundings, such as faces or doorways, stand out.
"There's a long road, but we're walking in the right direction," Lazzi said. "We can gift these prosthetics with intelligence, and with knowledge comes power."
About the studies
Both studies were conducted by the same USC research team. The first author on both is Javad Paknahad, an electrical engineering graduate student. Other authors are Kyle Loizos and Dr. Mark Humayun, co-inventor of the Argus II retinal prosthesis.
The Scientific Reports study was supported by the National Science Foundation (1833288), the National Institutes of Health (R21EY028744, U01EB025830) and Research to Prevent Blindness.
Disclosure:
Mark Humayun, MD, PhD, is a co-inventor of the Argus implant series and receives royalty payment. | Researchers at the Keck School of Medicine of the University of Southern California used an advanced computer model to mimic the human retina, in order to improve prosthetic eye technology. The model replicates the shapes and positions of millions of nerve cells in the eye, along with their associated physical and networking characteristics. The Keck team focused on nerve cells that send visual information from the eye to the brain, and identified potential ways to boost clarity and grant color vision to future retinal prostheses. Said Keck's Gianluca Lazzi, "There's a long road, but we're walking in the right direction. We can gift these prosthetics with intelligence, and with knowledge comes power." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the Keck School of Medicine of the University of Southern California used an advanced computer model to mimic the human retina, in order to improve prosthetic eye technology. The model replicates the shapes and positions of millions of nerve cells in the eye, along with their associated physical and networking characteristics. The Keck team focused on nerve cells that send visual information from the eye to the brain, and identified potential ways to boost clarity and grant color vision to future retinal prostheses. Said Keck's Gianluca Lazzi, "There's a long road, but we're walking in the right direction. We can gift these prosthetics with intelligence, and with knowledge comes power."
|||
598 | The People in This Medical Research Are Fake. The Innovations Are Real. | Researchers in Israel were happy to get their hands on data about thousands of Covid-19 patients, including a 63-year-old father of two who was admitted to the emergency room with Covid-19 and soon recovered. It was the early days of the coronavirus pandemic and the treatments used for this patient could provide invaluable insight into the then little-understood virus.
Normally, it would have been unthinkable to share sensitive medical details, such as the patient's use of Lipitor for high cholesterol, so quickly, without taking measures to safeguard his privacy. But this man wasn't real. He was a fake patient created by algorithms that take details from real-life data sets such as electronic medical records, scramble them and piece them back together to create artificial patient populations that largely mirror the real thing but don't include any real patients. | Medical researchers and data scientists are generating artificial patients algorithmically from real-life datasets to accelerate the development of innovations with real-world applications. Allan Tucker at the U.K.'s Brunel University London said, "The key advantage that synthetic data offers for healthcare is a large reduction in privacy risks that have bugged numerous projects [and] to open up healthcare data for the research and development of new technologies." The Covid-19 pandemic fueled demand for synthetic-data solutions as medical providers and researchers raced to understand the pathogen and develop treatments. Israel is a major testbed, using the MDClone startup's platform for creating synthetic data from medical records, for example. Not all synthetic-data research relies on real-life medical records: U.S. nonprofit Mitre's open source Synthea tool can generate populations of artificial patients from scratch, using publicly available data sources. | [] | [] | [] | scitechnews | None | None | None | None | Medical researchers and data scientists are generating artificial patients algorithmically from real-life datasets to accelerate the development of innovations with real-world applications. Allan Tucker at the U.K.'s Brunel University London said, "The key advantage that synthetic data offers for healthcare is a large reduction in privacy risks that have bugged numerous projects [and] to open up healthcare data for the research and development of new technologies." The Covid-19 pandemic fueled demand for synthetic-data solutions as medical providers and researchers raced to understand the pathogen and develop treatments. Israel is a major testbed, using the MDClone startup's platform for creating synthetic data from medical records, for example. Not all synthetic-data research relies on real-life medical records: U.S. nonprofit Mitre's open source Synthea tool can generate populations of artificial patients from scratch, using publicly available data sources.
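As a deliberately simplified sketch of that idea (estimate the statistics of a real patient table, then sample entirely artificial records from them), consider the following. Production systems preserve correlations between fields and add formal privacy guarantees that this independence-assuming toy does not, and the tiny table itself is invented.

```python
# Simplified sketch of synthetic-record generation: estimate the statistics of
# a small "real" table, then sample artificial patients from those statistics.
import random
import statistics

real_patients = [
    {"age": 63, "sex": "M", "on_statin": True,  "recovered": True},
    {"age": 47, "sex": "F", "on_statin": False, "recovered": True},
    {"age": 71, "sex": "M", "on_statin": True,  "recovered": False},
    {"age": 55, "sex": "F", "on_statin": False, "recovered": True},
]

def synthesize(n: int):
    ages = [p["age"] for p in real_patients]
    mu, sigma = statistics.mean(ages), statistics.stdev(ages)
    p_male = sum(p["sex"] == "M" for p in real_patients) / len(real_patients)
    p_statin = sum(p["on_statin"] for p in real_patients) / len(real_patients)
    p_recovered = sum(p["recovered"] for p in real_patients) / len(real_patients)
    for _ in range(n):
        yield {
            "age": max(0, round(random.gauss(mu, sigma))),
            "sex": "M" if random.random() < p_male else "F",
            "on_statin": random.random() < p_statin,
            "recovered": random.random() < p_recovered,
        }

for fake_patient in synthesize(3):
    print(fake_patient)   # mirrors the table's statistics, contains none of its people
```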
|||
599 | As Locusts Swarmed East Africa, This Tech Helped Squash Them | But as bad as 2020's swarms were, they and their offspring could have caused much worse damage. While the weather has helped slow the insects' reproduction, the success, Mr. Cressman said, has primarily resulted from a technology-driven anti-locust operation that hastily formed in the chaotic months following the insects' arrival to East Africa. This groundbreaking approach proved so effective at clamping down on the winged invaders in some places that some experts say it could transform management of other natural disasters around the world.
"We'd better not let this crisis go to waste," said David Hughes, an entomologist at Penn State University. "We should use this lesson as a way not just to be adapted to the next locust crisis, but to climate change, generally."
Desert locusts are the Dr. Jekylls and Mr. Hydes of the insect world. Normally, the grasshopper-like plant eaters spend their time living solitarily across the deserts of North Africa, Southwest Asia and the Middle East. But when rains arrive, they change from a muted brown into a fiery yellow and become gregarious, forming groups of more than 15 million insects per square mile. Such a swarm can consume as much food in a single day as more than 13,000 people.
The locust plague that hit East Africa in 2020 was two years in the making. In 2018, two major cyclones dumped rain in a remote area of Saudi Arabia, leading to an 8,000-fold increase in desert locust numbers. By mid-2019, winds had pushed the insects into the Horn of Africa, where a wet autumn further boosted their population. An unusual cyclone in Somalia in early December finally tipped the situation into a true emergency. | A 2020 locust plague in East Africa was mitigated by technology-driven countermeasures, spearheaded by Keith Cressman at the United Nations Food and Agriculture Organization (FAO) and Pennsylvania State University's David Hughes. The partners created the eLocust3m application for collecting dependable and detailed locust data, using a mobile tracking tool Hughes previously created with the FAO as a template. The smartphone-enabled app presents photos of locusts at different developmental stages, so users can diagnose what they observe in the field; global positioning system coordinates are automatically recorded, and algorithms double-check photos submitted with each entry. Wildlife-focused security and logistics company 51 Degrees repurposed anti-poaching aerial surveys to find and kill locust swarms, using a customized version of the EarthRanger program from philanthropic firm Vulcan. The program integrated data from the eLocust programs and the computer loggers on pesticide sprayers. | [] | [] | [] | scitechnews | None | None | None | None | A 2020 locust plague in East Africa was mitigated by technology-driven countermeasures, spearheaded by Keith Cressman at the United Nations Food and Agriculture Organization (FAO) and Pennsylvania State University's David Hughes. The partners created the eLocust3m application for collecting dependable and detailed locust data, using a mobile tracking tool Hughes previously created with the FAO as a template. The smartphone-enabled app presents photos of locusts at different developmental stages, so users can diagnose what they observe in the field; global positioning system coordinates are automatically recorded, and algorithms double-check photos submitted with each entry. Wildlife-focused security and logistics company 51 Degrees repurposed anti-poaching aerial surveys to find and kill locust swarms, using a customized version of the EarthRanger program from philanthropic firm Vulcan. The program integrated data from the eLocust programs and the computer loggers on pesticide sprayers.
But as bad as 2020's swarms were, they and their offspring could have caused much worse damage. While the weather has helped slow the insects' reproduction, the success, Mr. Cressman said, has primarily resulted from a technology-driven anti-locust operation that hastily formed in the chaotic months following the insects' arrival to East Africa. This groundbreaking approach proved so effective at clamping down on the winged invaders in some places that some experts say it could transform management of other natural disasters around the world.
"We'd better not let this crisis go to waste," said David Hughes, an entomologist at Penn State University. "We should use this lesson as a way not just to be adapted to the next locust crisis, but to climate change, generally."
Desert locusts are the Dr. Jekylls and Mr. Hydes of the insect world. Normally, the grasshopper-like plant eaters spend their time living solitarily across the deserts of North Africa, Southwest Asia and the Middle East. But when rains arrive, they change from a muted brown into a fiery yellow and become gregarious, forming groups of more than 15 million insects per square mile. Such a swarm can consume as much food in a single day as more than 13,000 people.
The locust plague that hit East Africa in 2020 was two years in the making. In 2018, two major cyclones dumped rain in a remote area of Saudi Arabia, leading to an 8,000-fold increase in desert locust numbers. By mid-2019, winds had pushed the insects into the Horn of Africa, where a wet autumn further boosted their population. An unusual cyclone in Somalia in early December finally tipped the situation into a true emergency. |
|||
600 | Monkey Equipped with Elon Musk's Neuralink Device Plays Pong with Its Brain | Elon Musk's Neuralink, one of his many companies and the only one currently focused on mind control (that we're aware of), has released a new blog post and video detailing some of its recent updates - including using its hardware to make it possible for a monkey to play Pong with only its brain.
In the video above, Neuralink demonstrates how it used its sensor hardware and brain implant to record a baseline of activity from this macaque (named "Pager") as it played a game on-screen where it had to move a token to different squares using a joystick with its hand. Using that baseline data, Neuralink was able to use machine learning to anticipate where Pager was going to be moving the physical controller, and was eventually able to predict it accurately before the move was actually made. Researchers then removed the paddle entirely, and eventually did the same thing with Pong, ultimately ending up at a place where Pager no longer was even moving its hand in the air on the nonexistent paddle, and was instead controlling the in-game action entirely with its mind via the Link hardware and embedded neural threads.
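As a rough illustration of the decoding step described above, the Python sketch below fits a linear (ridge-regression) map from binned neural firing rates to joystick velocity on simulated calibration data, then predicts cursor movement from neural activity alone. This is a generic textbook baseline, not Neuralink's actual pipeline; all channel counts, signals and numbers are made up.

import numpy as np

rng = np.random.default_rng(1)
n_bins, n_channels = 2000, 64

# Simulated calibration data: binned spike counts and the joystick velocity
# (x, y) recorded while the monkey still used the physical controller.
tuning = rng.normal(size=(n_channels, 2))
rates = rng.poisson(5.0, size=(n_bins, n_channels)).astype(float)
velocity = rates @ tuning + rng.normal(0.0, 2.0, size=(n_bins, 2))

# Ridge regression: learn a linear map from firing rates to intended velocity.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels), rates.T @ velocity)

# "Joystick unplugged": decode cursor movement from new neural activity alone.
new_rates = rng.poisson(5.0, size=(5, n_channels)).astype(float)
print(new_rates @ W)  # predicted (x, y) cursor velocities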
The last we saw of Neuralink, Musk himself was demonstrating the Link tech live in August 2020, using pigs to show how it was able to read signals from the brain depending on different stimuli. This new demo with Pager more clearly outlines the direction that the tech is headed in terms of human applications, since, as the company shared on its blog, the same technology could be used to help patients with paralysis manipulate a cursor on a computer, for instance. That could be applied to other paradigms as well, including touch controls on an iPhone, and even typing using a virtual keyboard, according to the company.
Musk separately tweeted that in fact, he expects the initial version of Neuralink's product to be able to allow someone with paralysis that prevents standard modes of phone interaction to use one faster than people using their thumbs for input. He also added that future iterations of the product would be able to enable communication between Neuralinks in different parts of a patient's body, transmitting between an in-brain node and neural pathways in legs, for instance, making it possible for "paraplegics to walk again."
These are obviously bold claims, but the company cites a lot of existing research that undergirds its existing demonstrations and near-term goals. Musk's more ambitious claims, should, like all of his projections, definitely be taken with a healthy dose of skepticism. He did add that he hopes human trials will begin to get underway "hopefully later this year," for instance - which is already two years later than he was initially anticipating those might start . | Elon Musk's Neuralink company released a blog post and video displaying a monkey playing the game of Pong by thought, via the firm's sensor hardware and brain implant. The technology recorded a baseline of activity from a macaque playing a game onscreen where it had to move a token to different squares using a joystick. Neuralink then employed machine learning to predict where the monkey would be moving the controller, and was eventually able to anticipate this accurately before the move was made. Researchers eliminated the paddle entirely and repurposed Pong to enable the animal to control in-game action entirely with its mind. Neuralink envisions the technology helping paralyzed patients, and Musk suggested future versions would facilitate communication between Neuralinks in different parts of a patient's body. | [] | [] | [] | scitechnews | None | None | None | None | Elon Musk's Neuralink company released a blog post and video displaying a monkey playing the game of Pong by thought, via the firm's sensor hardware and brain implant. The technology recorded a baseline of activity from a macaque playing a game onscreen where it had to move a token to different squares using a joystick. Neuralink then employed machine learning to predict where the monkey would be moving the controller, and was eventually able to anticipate this accurately before the move was made. Researchers eliminated the paddle entirely and repurposed Pong to enable the animal to control in-game action entirely with its mind. Neuralink envisions the technology helping paralyzed patients, and Musk suggested future versions would facilitate communication between Neuralinks in different parts of a patient's body.
Elon Musk's Neuralink, one of his many companies and the only one currently focused on mind control (that we're aware of), has released a new blog post and video detailing some of its recent updates - including using its hardware to make it possible for a monkey to play Pong with only its brain.
In the video above, Neuralink demonstrates how it used its sensor hardware and brain implant to record a baseline of activity from this macaque (named "Pager") as it played a game on-screen where it had to move a token to different squares using a joystick with its hand. Using that baseline data, Neuralink was able to use machine learning to anticipate where Pager was going to be moving the physical controller, and was eventually able to predict it accurately before the move was actually made. Researchers then removed the paddle entirely, and eventually did the same thing with Pong, ultimately ending up at a place where Pager no longer was even moving its hand in the air on the nonexistent paddle, and was instead controlling the in-game action entirely with its mind via the Link hardware and embedded neural threads.
The last we saw of Neuralink, Musk himself was demonstrating the Link tech live in August 2020, using pigs to show how it was able to read signals from the brain depending on different stimuli. This new demo with Pager more clearly outlines the direction that the tech is headed in terms of human applications, since, as the company shared on its blog, the same technology could be used to help patients with paralysis manipulate a cursor on a computer, for instance. That could be applied to other paradigms as well, including touch controls on an iPhone, and even typing using a virtual keyboard, according to the company.
Musk separately tweeted that in fact, he expects the initial version of Neuralink's product to be able to allow someone with paralysis that prevents standard modes of phone interaction to use one faster than people using their thumbs for input. He also added that future iterations of the product would be able to enable communication between Neuralinks in different parts of a patient's body, transmitting between an in-brain node and neural pathways in legs, for instance, making it possible for "paraplegics to walk again."
These are obviously bold claims, but the company cites a lot of existing research that undergirds its existing demonstrations and near-term goals. Musk's more ambitious claims, should, like all of his projections, definitely be taken with a healthy dose of skepticism. He did add that he hopes human trials will begin to get underway "hopefully later this year," for instance - which is already two years later than he was initially anticipating those might start . |
|||
601 | Computational Language Models Can Further Environmental Degradation, Language Bias | Natural language processing (NLP) technology used for modeling and predicting language patterns can promote linguistic bias and damage the environment, according to University of Washington (UW) researchers. NLP utilizes large-scale pattern recognition to generate predictive language models, and UW's Emily M. Bender said such models manifest in predictive text and autocorrect features. Although the algorithms are trained on vast datasets from the Internet to recognize patterns, the Internet's scale does not ensure diversity; when considering people who lack or shun Internet access and the weeding out of certain words, Bender said the datasets can exclude underrepresented voices. The UW study also determined that biases and abusive language patterns that perpetuate racism, sexism, or other harmful perspectives can be picked up in the training data. | [] | [] | [] | scitechnews | None | None | None | None | Natural language processing (NLP) technology used for modeling and predicting language patterns can promote linguistic bias and damage the environment, according to University of Washington (UW) researchers. NLP utilizes large-scale pattern recognition to generate predictive language models, and UW's Emily M. Bender said such models manifest in predictive text and autocorrect features. Although the algorithms are trained on vast datasets from the Internet to recognize patterns, the Internet's scale does not ensure diversity; when considering people who lack or shun Internet access and the weeding out of certain words, Bender said the datasets can exclude underrepresented voices. The UW study also determined that biases and abusive language patterns that perpetuate racism, sexism, or other harmful perspectives can be picked up in the training data.
|
||||
602 | Not OK, Computer: Music Streaming's Diversity Problem | Female artists represented just 25% of the music listened to by users of a streaming service, according to researchers at the Netherlands' Utrecht University and Spain's Universitat Pompeu Fabra. The researchers said their analysis of publicly available listening records of 330,000 users of a single streaming service revealed that "on average, the first recommended track was by a man, along with the next six. Users had to wait until song seven or eight to hear one by a woman." Streaming service algorithms recommend music based on what has been listened to before, which creates a vicious feedback cycle if it already offers more music by men. The researchers simulated and modified the algorithm to elevate the rankings of women while lowering those of men, which created a new feedback loop. The algorithm recommended female artists earlier, increasing user awareness so the program would recommend female artists more often when that content was selected. | [] | [] | [] | scitechnews | None | None | None | None | Female artists represented just 25% of the music listened to by users of a streaming service, according to researchers at the Netherlands' Utrecht University and Spain's Universitat Pompeu Fabra. The researchers said their analysis of publicly available listening records of 330,000 users of a single streaming service revealed that "on average, the first recommended track was by a man, along with the next six. Users had to wait until song seven or eight to hear one by a woman." Streaming service algorithms recommend music based on what has been listened to before, which creates a vicious feedback cycle if it already offers more music by men. The researchers simulated and modified the algorithm to elevate the rankings of women while lowering those of men, which created a new feedback loop. The algorithm recommended female artists earlier, increasing user awareness so the program would recommend female artists more often when that content was selected.
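A minimal Python sketch of the kind of re-ranking intervention described above (purely illustrative; the study's exact algorithm and parameter values are not given here): take the recommender's ranked list and push tracks by male artists down by a fixed number of positions so that tracks by female artists surface earlier.

from dataclasses import dataclass

@dataclass
class Track:
    title: str
    artist_gender: str  # "f" or "m"
    score: float        # the recommender's original relevance score

def rerank(tracks, offset=2):
    # Sort by the original score, then push tracks by male artists down by
    # `offset` positions so that tracks by female artists surface earlier.
    base = sorted(tracks, key=lambda t: t.score, reverse=True)
    keyed = [(i + (offset if t.artist_gender == "m" else 0), i, t)
             for i, t in enumerate(base)]
    return [t for _, _, t in sorted(keyed, key=lambda k: (k[0], k[1]))]

demo = [Track("A", "m", 0.9), Track("B", "m", 0.8), Track("C", "f", 0.7),
        Track("D", "m", 0.6), Track("E", "f", 0.5)]
print([t.title for t in rerank(demo)])  # tracks by female artists move up the list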
|
||||
604 | Swiss Robots Use UV Light to Zap Viruses Aboard Passenger Planes | ZURICH (Reuters) - A robot armed with virus-killing ultraviolet light is being tested on Swiss airplanes, yet another idea aiming to restore passenger confidence and spare the travel industry more pandemic pain.
UVeya, a Swiss start-up, is conducting the trials of the robots with Dubai-based airport services company Dnata inside Embraer jets from Helvetic Airways, a charter airline owned by Swiss billionaire Martin Ebner.
Aircraft makers still must certify the devices and are studying the impact their UV light may have on interior upholstery, which could fade after many disinfections, UVeya co-founder Jodoc Elmiger said.
Still, he's hopeful robot cleaners could reduce people's fear of flying, even as COVID-19 circulates.
"This is a proven technology, it's been used for over 50 years in hospitals and laboratories, it's very efficient," Elmiger said on Wednesday. "It doesn't leave any trace or residue."
Elmiger's team has built three prototypes so far, one of which he demonstrated inside a Helvetic jet at the Zurich Airport, where traffic plunged 75% last year.
The robot's lights, mounted on a crucifix-shaped frame, cast everything in a soft-blue glow as it slowly moved up the Embraer's aisle. One robot can disinfect a single-aisled plane in 13 minutes, start to finish, though larger planes take longer.
Dnata executives hope airplane makers will sign off on the robots -- Elmiger estimates they'll sell for 15,000 Swiss francs ($15,930) or so -- as governments require new measures to ensure air travellers don't get sick.
"We were looking for a sustainable, and also environmentally friendly solution, to cope with those requests," said Lukas Gyger, Dnata's chief operating officer in Switzerland.
While privately owned Helvetic has not needed bailouts like much of the industry, its business has also been gutted, with its fleet sitting largely silently in hangars. UVeya's UV robots may help change that, said Mehdi Guenin, a Helvetic spokesman.
"If our passengers, if our crew know our aircraft are safe -- that there are no viruses or bacteria -- it could help them to fly again," Guenin said.
($1 = 0.9418 Swiss francs) | A robot developed by Swiss startup UVeya is using ultraviolet (UV) light to kill viruses aboard Swiss passenger planes in a test being conducted with airport services company Dnata in the United Arab Emirates. The UVeya team has built three prototypes of the robot, one of which co-founder Jodoc Elmiger demonstrated inside a Helvetic Airways jet at Switzerland's Zurich Airport. One robot can disinfect a single-aisle plane in 13 minutes. Dnata's Lukas Gyger said, "We were looking for a sustainable, and also environmentally friendly, solution to cope with [requests to ensure air travelers do not get sick]." | [] | [] | [] | scitechnews | None | None | None | None | A robot developed by Swiss startup UVeya is using ultraviolet (UV) light to kill viruses aboard Swiss passenger planes in a test being conducted with airport services company Dnata in the United Arab Emirates. The UVeya team has built three prototypes of the robot, one of which co-founder Jodoc Elmiger demonstrated inside a Helvetic Airways jet at Switzerland's Zurich Airport. One robot can disinfect a single-aisle plane in 13 minutes. Dnata's Lukas Gyger said, "We were looking for a sustainable, and also environmentally friendly, solution to cope with [requests to ensure air travelers do not get sick]."
ZURICH (Reuters) - A robot armed with virus-killing ultraviolet light is being tested on Swiss airplanes, yet another idea aiming to restore passenger confidence and spare the travel industry more pandemic pain.
UVeya, a Swiss start-up, is conducting the trials of the robots with Dubai-based airport services company Dnata inside Embraer jets from Helvetic Airways, a charter airline owned by Swiss billionaire Martin Ebner.
Aircraft makers still must certify the devices and are studying the impact their UV light may have on interior upholstery, which could fade after many disinfections, UVeya co-founder Jodoc Elmiger said.
Still, he's hopeful robot cleaners could reduce people's fear of flying, even as COVID-19 circulates.
"This is a proven technology, it's been used for over 50 years in hospitals and laboratories, it's very efficient," Elmiger said on Wednesday. "It doesn't leave any trace or residue."
Elmiger's team has built three prototypes so far, one of which he demonstrated inside a Helvetic jet at the Zurich Airport, where traffic plunged 75% last year.
The robot's lights, mounted on a crucifix-shaped frame, cast everything in a soft-blue glow as it slowly moved up the Embraer's aisle. One robot can disinfect a single-aisled plane in 13 minutes, start to finish, though larger planes take longer.
Dnata executives hope airplane makers will sign off on the robots -- Elmiger estimates they'll sell for 15,000 Swiss francs ($15,930) or so -- as governments require new measures to ensure air travellers don't get sick.
"We were looking for a sustainable, and also environmentally friendly solution, to cope with those requests," said Lukas Gyger, Dnata's chief operating officer in Switzerland.
While privately owned Helvetic has not needed bailouts like much of the industry, its business has also been gutted, with its fleet sitting largely silently in hangars. UVeya's UV robots may help change that, said Mehdi Guenin, a Helvetic spokesman.
"If our passengers, if our crew know our aircraft are safe -- that there are no viruses or bacteria -- it could help them to fly again," Guenin said.
($1 = 0.9418 Swiss francs) |
|||
606 | Separating Fact From Fiction: UA Professors Study Online 'Pseudo-Reviews' | The popularity of purchasing goods and services through online retailers such as Amazon continues to increase, making it overwhelming for consumers to differentiate fact from fiction in online product and service reviews. Thanks to the latest research from professors at UA, consumers, as well as marketers, can better identify and understand the impact of exaggerated or phony online reviews, helping them to make more informed decisions. Dr. Federico de Gregorio Associate Professor Dr. Federico de Gregorio and Assistant Professor Dr. Alexa K. Fox in UA's Department of Marketing , along with Associate Professor Dr. Hye Jin Yoon in the University of Georgia's Department of Advertising and Public Relations, are the first to conceptualize and investigate the effects of a new type of online user-generated content called pseudo-reviews. The content of a pseudo-review often resembles authentic reviews on the surface, purporting to tell an story about product use. However, while authentic reviews often may include humor as a stylistic device to convey a genuine product evaluation, pseudo-reviews use humor typically to mock some product aspect. User pseudo-review on Amazon.com about a 105-inch, 4K, $120,000 Samsung TV: "I was able to purchase this amazing television with an FHA loan (30 year fixed-rate w/ 4.25% APR) and only 3.5% down. This is, hands down, the best decision I've ever made. And the box it came in is incredibly roomy too, which is a huge bonus, because I live in it now." User authentic review with humor on Yelp.com about a San Francisco-area restaurant: "How on God's green earth is this place still in business? Drunk college kids! Do not reward unethical business people and bad customer service by coming here. And the food will make you curse the day you were ever born more so than your hangover. You can seriously get better food dumpster diving." The researchers' paper, "Pseudo-reviews: Conceptualization and Consumer Effects of a New Online Phenomenon," was recently published in Computers in Human Behavior . Dr. Alexa Fox 'Pseudo-reviews' vs. authentic reviews The results of two studies suggest there are differences in terms of consumers' perceptions of and behaviors related to pseudo-reviews compared to authentic reviews. The researchers find that pseudo-reviews have little effect on consumers' attitudes about a product when presented individually. But when pseudo-reviews are presented together with authentic reviews, they negatively affect consumers' attitudes and purchase intentions if the number of pseudo-reviews matches the number of authentic reviews. The authors conclude that too many pseudo-reviews present on a platform could be detrimental, even resulting in consumers abandoning the platform. Given how difficult it can be to quickly and efficiently distinguish pseudo-reviews among authentic ones, it is critical that marketers, especially those of typical products that are perceived as relatable to the average consumer (such as something ordinary as ballpoint pens) watch for pseudo-reviews and understand their potential impact. "As consumers become increasingly exposed to pseudo-reviews, our research helps consumers understand how pseudo-reviews might influence their perception of a product they are considering," says de Gregorio. "The results show that pseudo-reviews are perceived as not very helpful or realistic. 
However, despite this, for a 'typical" product,' when consumers see the same number of pseudo-reviews as authentic reviews at the same time, product attitude is worse and purchase intention is lower. If there are more or fewer pseudo-reviews than authentic reviews on a page, pseudo-reviews do not seem to have an effect. For a product that is perceived to be atypical or unusual in some way, such as a banana slicer device, pseudo-reviews do not seem to have an effect." "It is important that consumers and managers alike understand this unique type of online review, especially given the growing sea of user-generated content that is available to today's consumers," adds Fox. "While pseudo-reviews may not appear problematic on the surface due to their humorous nature, indeed, they have the potential to be damaging to consumers' decision-making processes." Research highlights: Pseudo-reviews are different from humorous reviews (i.e., authentic reviews that genuinely convey an assessment of a product, but that happen to have some humor in them) and deceptive reviews (i.e., reviews created by a product's competitors or bots to simulate genuine reviews). While pseudo-reviews appear similar to authentic reviews, their main purpose is to mock some aspect of a product using humor. Pseudo-reviews are generally perceived by consumers as less helpful than authentic reviews. When presented in isolation, pseudo-reviews have little effect on product attitude. Presented among authentic reviews, pseudo-reviews have effects only when their number matches the number of authentic reviews. Presented among authentic reviews , product attitudes and purchase intentions are lower when the number of pseudo-reviews matches that of authentic reviews. Related: UA's Department of Marketing The College of Business Adminstration at UA Media contact: Alex Knisely , 330-972-6477 or [email protected] . | A study by researchers at the University of Akron (UA) and the University of Georgia looks at the impact of pseudo-reviews on online platforms found that pseudo-reviews appear like authentic reviews in telling a story about product use, but often use humor to mock aspects of the product. They found that pseudo-reviews on their own have little impact on consumers' attitudes about a product, but when the number of pseudo-reviews and authentic reviews is the same, consumers' attitudes and purchase intentions are negatively affected. The researchers also noted that consumers could abandon platforms that feature too many pseudo-reviews. UA's Alexa K. Fox said, "While pseudo-reviews may not appear problematic on the surface due to their humorous nature, indeed, they have the potential to be damaging to consumers' decision-making processes." | [] | [] | [] | scitechnews | None | None | None | None | A study by researchers at the University of Akron (UA) and the University of Georgia looks at the impact of pseudo-reviews on online platforms found that pseudo-reviews appear like authentic reviews in telling a story about product use, but often use humor to mock aspects of the product. They found that pseudo-reviews on their own have little impact on consumers' attitudes about a product, but when the number of pseudo-reviews and authentic reviews is the same, consumers' attitudes and purchase intentions are negatively affected. The researchers also noted that consumers could abandon platforms that feature too many pseudo-reviews. UA's Alexa K. 
Fox said, "While pseudo-reviews may not appear problematic on the surface due to their humorous nature, indeed, they have the potential to be damaging to consumers' decision-making processes."
The popularity of purchasing goods and services through online retailers such as Amazon continues to increase, making it overwhelming for consumers to differentiate fact from fiction in online product and service reviews. Thanks to the latest research from professors at UA, consumers, as well as marketers, can better identify and understand the impact of exaggerated or phony online reviews, helping them to make more informed decisions. Dr. Federico de Gregorio Associate Professor Dr. Federico de Gregorio and Assistant Professor Dr. Alexa K. Fox in UA's Department of Marketing , along with Associate Professor Dr. Hye Jin Yoon in the University of Georgia's Department of Advertising and Public Relations, are the first to conceptualize and investigate the effects of a new type of online user-generated content called pseudo-reviews. The content of a pseudo-review often resembles authentic reviews on the surface, purporting to tell an story about product use. However, while authentic reviews often may include humor as a stylistic device to convey a genuine product evaluation, pseudo-reviews use humor typically to mock some product aspect. User pseudo-review on Amazon.com about a 105-inch, 4K, $120,000 Samsung TV: "I was able to purchase this amazing television with an FHA loan (30 year fixed-rate w/ 4.25% APR) and only 3.5% down. This is, hands down, the best decision I've ever made. And the box it came in is incredibly roomy too, which is a huge bonus, because I live in it now." User authentic review with humor on Yelp.com about a San Francisco-area restaurant: "How on God's green earth is this place still in business? Drunk college kids! Do not reward unethical business people and bad customer service by coming here. And the food will make you curse the day you were ever born more so than your hangover. You can seriously get better food dumpster diving." The researchers' paper, "Pseudo-reviews: Conceptualization and Consumer Effects of a New Online Phenomenon," was recently published in Computers in Human Behavior . Dr. Alexa Fox 'Pseudo-reviews' vs. authentic reviews The results of two studies suggest there are differences in terms of consumers' perceptions of and behaviors related to pseudo-reviews compared to authentic reviews. The researchers find that pseudo-reviews have little effect on consumers' attitudes about a product when presented individually. But when pseudo-reviews are presented together with authentic reviews, they negatively affect consumers' attitudes and purchase intentions if the number of pseudo-reviews matches the number of authentic reviews. The authors conclude that too many pseudo-reviews present on a platform could be detrimental, even resulting in consumers abandoning the platform. Given how difficult it can be to quickly and efficiently distinguish pseudo-reviews among authentic ones, it is critical that marketers, especially those of typical products that are perceived as relatable to the average consumer (such as something ordinary as ballpoint pens) watch for pseudo-reviews and understand their potential impact. "As consumers become increasingly exposed to pseudo-reviews, our research helps consumers understand how pseudo-reviews might influence their perception of a product they are considering," says de Gregorio. "The results show that pseudo-reviews are perceived as not very helpful or realistic. 
However, despite this, for a 'typical" product,' when consumers see the same number of pseudo-reviews as authentic reviews at the same time, product attitude is worse and purchase intention is lower. If there are more or fewer pseudo-reviews than authentic reviews on a page, pseudo-reviews do not seem to have an effect. For a product that is perceived to be atypical or unusual in some way, such as a banana slicer device, pseudo-reviews do not seem to have an effect." "It is important that consumers and managers alike understand this unique type of online review, especially given the growing sea of user-generated content that is available to today's consumers," adds Fox. "While pseudo-reviews may not appear problematic on the surface due to their humorous nature, indeed, they have the potential to be damaging to consumers' decision-making processes." Research highlights: Pseudo-reviews are different from humorous reviews (i.e., authentic reviews that genuinely convey an assessment of a product, but that happen to have some humor in them) and deceptive reviews (i.e., reviews created by a product's competitors or bots to simulate genuine reviews). While pseudo-reviews appear similar to authentic reviews, their main purpose is to mock some aspect of a product using humor. Pseudo-reviews are generally perceived by consumers as less helpful than authentic reviews. When presented in isolation, pseudo-reviews have little effect on product attitude. Presented among authentic reviews, pseudo-reviews have effects only when their number matches the number of authentic reviews. Presented among authentic reviews , product attitudes and purchase intentions are lower when the number of pseudo-reviews matches that of authentic reviews. Related: UA's Department of Marketing The College of Business Adminstration at UA Media contact: Alex Knisely , 330-972-6477 or [email protected] . |
|||
607 | Quantum Technology Emerges From the Lab to Spark a Mini Start-Up Boom | The University of Chicago, the University of Illinois, and Argonne National Laboratory have rolled out the first program in the U.S. to support quantum-tech start-ups. The University of Chicago's David Awschalom said, "We are at the birth of a new field of technology. It's like we're at the point where the transistor is being invented. People are beginning to think about systems, software, applications." The Duality accelerator program, based at the University of Chicago's Booth School of Business, will invest $20 million over the next 10 years to assist as many as 10 quantum start-ups annually. The start-ups will benefit from $50,000 grants, access to lab and office space, and faculty mentoring. The University of Chicago's Fred Chong noted that "there is very little on the software side" of quantum computing, and the challenge for developers is to develop programs that work with today's "imperfect quantum machines." | [] | [] | [] | scitechnews | None | None | None | None | The University of Chicago, the University of Illinois, and Argonne National Laboratory have rolled out the first program in the U.S. to support quantum-tech start-ups. The University of Chicago's David Awschalom said, "We are at the birth of a new field of technology. It's like we're at the point where the transistor is being invented. People are beginning to think about systems, software, applications." The Duality accelerator program, based at the University of Chicago's Booth School of Business, will invest $20 million over the next 10 years to assist as many as 10 quantum start-ups annually. The start-ups will benefit from $50,000 grants, access to lab and office space, and faculty mentoring. The University of Chicago's Fred Chong noted that "there is very little on the software side" of quantum computing, and the challenge for developers is to develop programs that work with today's "imperfect quantum machines."
|
||||
608 | Human Brain Organoids Grown in Cheap 3D-Printed Bioreactor | By Christa Lesté-Lasserre
Image: An organoid grown in a microfluidic bioreactor (MIT and IIT Madras)
It is now possible to grow and culture human brain tissue in a device that costs little more than a cup of coffee. With a $5 washable and reusable microchip, scientists can watch self-organising brain samples, known as brain organoids , growing in real time under a microscope.
The device, dubbed a "microfluidic bioreactor" , is a 4-by-6-centimetre chip that includes small wells in which the brain organoids grow. Each is filled with nutrient-rich fluid that is pumped in and out automatically, like the fluids that flush through the human brain.
Using this system, Ikram Khan at the Indian Institute of Technology Madras in Chennai and his colleagues at the Massachusetts Institute of Technology (MIT) have now reported the growth of a brain organoid over seven days. This demonstrates that the brain cells can thrive inside the chip, says Khan.
Culturing brain tissue in a laboratory would theoretically let scientists test how individual patients' brains might react to different kinds of medications.
Devices for growing brain organoids already exist, but because the dishes are sealed shut to avoid contamination from microorganisms in the air, it is impossible to add nutrients like amino acids, vitamins, salts and glucose or to remove the waste produced by the cells. As a consequence, the cells usually die within a few days.
To combat that problem, researchers have previously added tiny tubes to deliver nutrients to the brain tissue. But the opaque design of these devices makes it impossible to watch what is happening inside the dish - a significant problem, especially if scientists want to know how the tissue reacts to drugs.
So Khan and his colleagues engineered a new, simpler device that combines a growing platform, tiny tubes, drug-injection channels and even a fluid-warming compartment all onto a single chip, which can be 3D-printed using the same kind of biocompatible resin used in dental surgery. The bioreactors control the flow of replenishing fluid and waste extraction through tubes in an enclosed incubator while providing full visibility.
To test their system, the researchers placed human brain-differentiated stem cells in the wells and programmed fluid flow through the chip. Using a microscope above the platform, they could watch the brain tissue develop for a full week - essentially until the organoids ran out of space in their tiny wells.
During that time, they saw that the cells multiplied and formed a ventricle-like structure, similar to the cavities seen in real brains, says Chloé Delépine at MIT. The ventricle was surrounded by tissue that appeared similar to that of the neocortex, a brain layer responsible for higher-order functions like thinking, reasoning and language comprehension.
Human brain organoids have reached such a level of development in a laboratory before, but this marks the first time it has happened in a device that allows such good visibility of the tissue, and so inexpensively, says Delépine.
"My goal is to see this technology reach people throughout the world who need access to it for their healthcare needs," says Khan, who has since created a start-up company in India to realise this objective.
Journal reference: Biomicrofluidics , DOI: 10.1063/5.0041027 | A human brain organoid was cultured in a week in a three-dimensionally (3D) -printed microfluidic bioreactor developed by researchers at the Indian Institute of Technology Madras (IIT Madras) and the Massachusetts Institute of Technology. The ultra-cheap bioreactor consists of a $5 washable, reusable microchip containing wells where brain tissue grows. Nutrient fluids are automatically pumped through these channels, feeding the tissue. The chip can be 3D-printed using the same kind of biocompatible resin used in dental surgery, while the bioreactors control the flow of nutrient fluid and purge waste through tubes in an enclosed incubator. IIT Madras' Ikram Khan said, "My goal is to see this technology reach people throughout the world who need access to it for their healthcare needs." | [] | [] | [] | scitechnews | None | None | None | None | A human brain organoid was cultured in a week in a three-dimensionally (3D) -printed microfluidic bioreactor developed by researchers at the Indian Institute of Technology Madras (IIT Madras) and the Massachusetts Institute of Technology. The ultra-cheap bioreactor consists of a $5 washable, reusable microchip containing wells where brain tissue grows. Nutrient fluids are automatically pumped through these channels, feeding the tissue. The chip can be 3D-printed using the same kind of biocompatible resin used in dental surgery, while the bioreactors control the flow of nutrient fluid and purge waste through tubes in an enclosed incubator. IIT Madras' Ikram Khan said, "My goal is to see this technology reach people throughout the world who need access to it for their healthcare needs."
By Christa Lesté-Lasserre
Image: An organoid grown in a microfluidic bioreactor (MIT and IIT Madras)
It is now possible to grow and culture human brain tissue in a device that costs little more than a cup of coffee. With a $5 washable and reusable microchip, scientists can watch self-organising brain samples, known as brain organoids , growing in real time under a microscope.
The device, dubbed a "microfluidic bioreactor" , is a 4-by-6-centimetre chip that includes small wells in which the brain organoids grow. Each is filled with nutrient-rich fluid that is pumped in and out automatically, like the fluids that flush through the human brain.
Using this system, Ikram Khan at the Indian Institute of Technology Madras in Chennai and his colleagues at the Massachusetts Institute of Technology (MIT) have now reported the growth of a brain organoid over seven days. This demonstrates that the brain cells can thrive inside the chip, says Khan.
Culturing brain tissue in a laboratory would theoretically let scientists test how individual patients' brains might react to different kinds of medications.
Devices for growing brain organoids already exist, but because the dishes are sealed shut to avoid contamination from microorganisms in the air, it is impossible to add nutrients like amino acids, vitamins, salts and glucose or to remove the waste produced by the cells. As a consequence, the cells usually die within a few days.
To combat that problem, researchers have previously added tiny tubes to deliver nutrients to the brain tissue. But the opaque design of these devices makes it impossible to watch what is happening inside the dish - a significant problem, especially if scientists want to know how the tissue reacts to drugs.
So Khan and his colleagues engineered a new, simpler device that combines a growing platform, tiny tubes, drug-injection channels and even a fluid-warming compartment all onto a single chip, which can be 3D-printed using the same kind of biocompatible resin used in dental surgery. The bioreactors control the flow of replenishing fluid and waste extraction through tubes in an enclosed incubator while providing full visibility.
To test their system, the researchers placed human brain-differentiated stem cells in the wells and programmed fluid flow through the chip. Using a microscope above the platform, they could watch the brain tissue develop for a full week - essentially until the organoids ran out of space in their tiny wells.
During that time, they saw that the cells multiplied and formed a ventricle-like structure, similar to the cavities seen in real brains, says Chloé Delépine at MIT. The ventricle was surrounded by tissue that appeared similar to that of the neocortex, a brain layer responsible for higher-order functions like thinking, reasoning and language comprehension.
Human brain organoids have reached such a level of development in a laboratory before, but this marks the first time it has happened in a device that allows such good visibility of the tissue, and so inexpensively, says Delépine.
"My goal is to see this technology reach people throughout the world who need access to it for their healthcare needs," says Khan, who has since created a start-up company in India to realise this objective.
Journal reference: Biomicrofluidics , DOI: 10.1063/5.0041027 |
|||
609 | ML Tool Converts 2D Material Images Into 3D Structures | A new algorithm developed at Imperial College London can convert 2D images of composite materials into 3D structures.
The machine learning algorithm could help materials scientists and manufacturers to study and improve the design and production of composite materials like battery electrodes and aircraft parts in 3D.
Using data from 2D cross-sections of composite materials, which are made by combining different materials with distinct physical and chemical properties, the algorithm can expand the dimensions of cross-sections to convert them into 3D computerised models. This allows scientists to study the different materials, or 'phases', of a composite and how they fit together.
The tool learns what 2D cross-sections of composites look like and scales them up so their phases can be studied in a 3D space. It could in future be used to optimise the designs of these types of materials by allowing scientists and manufacturers to study the layered architecture of the composites.
The researchers found their technique to be cheaper and faster than creating 3D computer representations from physical 3D objects. It was also able to more clearly identify different phases within the materials, which is more difficult to do using current techniques.
The findings are published in Nature Machine Intelligence .
Lead author of the paper Steve Kench , PhD student in the Tools for Learning, Design and Research (TLDR) group at Imperial's Dyson School of Design Engineering , said: "Combining materials as composites allows you to take advantages of the best properties of each component, but studying them in detail can be challenging as the arrangement of the materials strongly affects the performance. Our algorithm allows researchers to take their 2D image data and generate 3D structures with all the same properties, which allows them to perform more realistic simulations."
Studying, designing, and manufacturing composite materials in three dimensions is currently challenging. 2D images are cheap to obtain and give researchers high resolution, wide fields of view, and are very good at telling the different materials apart. On the other hand, 3D imaging techniques are often expensive and comparatively blurry. Their low resolution also makes it difficult to identify different phases within a composite.
For example, researchers are currently unable to identify materials within battery electrodes, which consist of ceramic material, carbon polymetric binders, and pores for the liquid phase, using 3D imaging techniques.
In this study, the researchers used a new machine learning technique called 'deep convolutional generative adversarial networks' (DC-GANs) which was invented in 2014.
This approach, where two neural networks are made to compete against each other, is at the heart of the tool for converting 2D to 3D. One neural network is shown the 2D images and learns to recognise them, while the other tries to make "fake" 3D versions. If the first network looks at all the 2D slices in the "fake" 3D version and thinks they're "real," then the versions can be used for simulating any materials property of interest.
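As a rough sketch of that adversarial setup, the Python/PyTorch toy below has a generator propose small 3D volumes while the discriminator only ever sees 2D slices and must separate them from "real" 2D sections. It is a stand-in, not the paper's actual architecture, data or training schedule: random noise replaces real micrographs, the networks are tiny fully connected layers rather than deep convolutional ones, and only one slicing axis is used.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 32 * 32 * 32), nn.Tanh())        # latent vector -> 3D volume
D = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128),
                  nn.LeakyReLU(0.2), nn.Linear(128, 1))           # 2D slice -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_2d = torch.rand(256, 32, 32)   # stand-in for real 2D cross-section images

for step in range(200):
    fake_vol = G(torch.randn(8, 64)).view(8, 32, 32, 32)
    # Take one random axis-aligned slice of each generated volume
    # (the published method slices along all three axes).
    fake_slices = fake_vol[:, torch.randint(32, (1,)).item()]
    real_slices = real_2d[torch.randint(256, (8,))]

    # Discriminator learns to score real slices 1 and generated slices 0.
    d_loss = (bce(D(real_slices), torch.ones(8, 1)) +
              bce(D(fake_slices.detach()), torch.zeros(8, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator tries to make volumes whose slices the discriminator calls real.
    g_loss = bce(D(fake_slices), torch.ones(8, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()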
The same approach also allows researchers to run simulations using different materials and compositions much faster than was previously possible, which will accelerate the search for better composites.
Co-author Dr Sam Cooper , who leads the TLDR group at the Dyson School of Design Engineering, said: "The performance of many devices that contain composite materials, such as batteries, is closely tied to the 3D arrangement of their components at the microscale. However, 3D imaging these materials in enough detail can be painstaking. We hope that our new machine learning tool will empower the materials design community by getting rid of the dependence on expensive 3D imaging machines in many scenarios."
This project was funded by EPSRC Faraday Institution Multi-Scale Modelling project.
"Generating three-dimensional structures from a two-dimensional slice with generative adversarial network-based dimensionality expansion" by Steve Kench and Samuel J. Cooper, published 5 April 2021 in Nature Machine Intelligence .
Images: Steve Kench, Imperial College London | A new machine learning algorithm developed by researchers at the U.K.'s Imperial College London (ICL) can render two-dimensional (2D) images of composite materials into three-dimensional (3D) structures. ICL's Steve Kench said, "Our algorithm allows researchers to take their 2D image data and generate 3D structures with all the same properties, which allows them to perform more realistic simulations." The tool uses deep convolutional generative adversarial networks to learn the appearance of 2D composite cross-sections, and expands them so their "phases" (the different components of the composite material) can be studied in 3D space. The researchers found this method to be less expensive and faster than generating 3D computer representations from physical 3D objects, and able to identify different phases more clearly. ICL's Sam Cooper said, "We hope that our new machine learning tool will empower the materials design community by getting rid of the dependence on expensive 3D imaging machines in many scenarios." | [] | [] | [] | scitechnews | None | None | None | None | A new machine learning algorithm developed by researchers at the U.K.'s Imperial College London (ICL) can render two-dimensional (2D) images of composite materials into three-dimensional (3D) structures. ICL's Steve Kench said, "Our algorithm allows researchers to take their 2D image data and generate 3D structures with all the same properties, which allows them to perform more realistic simulations." The tool uses deep convolutional generative adversarial networks to learn the appearance of 2D composite cross-sections, and expands them so their "phases" (the different components of the composite material) can be studied in 3D space. The researchers found this method to be less expensive and faster than generating 3D computer representations from physical 3D objects, and able to identify different phases more clearly. ICL's Sam Cooper said, "We hope that our new machine learning tool will empower the materials design community by getting rid of the dependence on expensive 3D imaging machines in many scenarios."
A new algorithm developed at Imperial College London can convert 2D images of composite materials into 3D structures.
The machine learning algorithm could help materials scientists and manufacturers to study and improve the design and production of composite materials like battery electrodes and aircraft parts in 3D.
Using data from 2D cross-sections of composite materials, which are made by combining different materials with distinct physical and chemical properties, the algorithm can expand the dimensions of cross-sections to convert them into 3D computerised models. This allows scientists to study the different materials, or 'phases', of a composite and how they fit together.
The tool learns what 2D cross-sections of composites look like and scales them up so their phases can be studied in a 3D space. It could in future be used to optimise the designs of these types of materials by allowing scientists and manufacturers to study the layered architecture of the composites.
The researchers found their technique to be cheaper and faster than creating 3D computer representations from physical 3D objects. It was also able to more clearly identify different phases within the materials, which is more difficult to do using current techniques.
The findings are published in Nature Machine Intelligence .
Lead author of the paper Steve Kench , PhD student in the Tools for Learning, Design and Research (TLDR) group at Imperial's Dyson School of Design Engineering , said: "Combining materials as composites allows you to take advantages of the best properties of each component, but studying them in detail can be challenging as the arrangement of the materials strongly affects the performance. Our algorithm allows researchers to take their 2D image data and generate 3D structures with all the same properties, which allows them to perform more realistic simulations."
Studying, designing, and manufacturing composite materials in three dimensions is currently challenging. 2D images are cheap to obtain and give researchers high resolution, wide fields of view, and are very good at telling the different materials apart. On the other hand, 3D imaging techniques are often expensive and comparatively blurry. Their low resolution also makes it difficult to identify different phases within a composite.
For example, researchers are currently unable to identify materials within battery electrodes, which consist of ceramic material, carbon polymetric binders, and pores for the liquid phase, using 3D imaging techniques.
In this study, the researchers used a new machine learning technique called 'deep convolutional generative adversarial networks' (DC-GANs) which was invented in 2014.
This approach, where two neural networks are made to compete against each other, is at the heart of the tool for converting 2D to 3D. One neural network is shown the 2D images and learns to recognise them, while the other tries to make "fake" 3D versions. If the first network looks at all the 2D slices in the "fake" 3D version and thinks they're "real," then the versions can be used for simulating any materials property of interest.
The same approach also allows researchers to run simulations using different materials and compositions much faster than was previously possible, which will accelerate the search for better composites.
Co-author Dr Sam Cooper , who leads the TLDR group at the Dyson School of Design Engineering, said: "The performance of many devices that contain composite materials, such as batteries, is closely tied to the 3D arrangement of their components at the microscale. However, 3D imaging these materials in enough detail can be painstaking. We hope that our new machine learning tool will empower the materials design community by getting rid of the dependence on expensive 3D imaging machines in many scenarios."
This project was funded by EPSRC Faraday Institution Multi-Scale Modelling project.
"Generating three-dimensional structures from a two-dimensional slice with generative adversarial network-based dimensionality expansion" by Steve Kench and Samuel J. Cooper, published 5 April 2021 in Nature Machine Intelligence .
Images: Steve Kench, Imperial College London |
|||
610 | Scientists Create Online Games to Show Risks of AI Emotion Recognition | It is a technology that has been frowned upon by ethicists: now researchers are hoping to unmask the reality of emotion recognition systems in an effort to boost public debate.
Technology designed to identify human emotions using machine learning algorithms is a huge industry , with claims it could prove valuable in myriad situations, from road safety to market research. But critics say the technology not only raises privacy concerns, but is inaccurate and racially biased .
A team of researchers have created a website - emojify.info - where the public can try out emotion recognition systems through their own computer cameras. One game focuses on pulling faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context.
Their hope, the researchers say, is to raise awareness of the technology and promote conversations about its use.
"It is a form of facial recognition, but it goes farther because rather than just identifying people, it claims to read our emotions, our inner feelings from our faces," said Dr Alexa Hagerty, project lead and researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk.
Facial recognition technology, often used to identify people, has come under intense scrutiny in recent years . Last year the Equality and Human Rights Commission said its use for mass screening should be halted , saying it could increase police discrimination and harm freedom of expression.
But Hagerty said many people were not aware how common emotion recognition systems were, noting they were employed in situations ranging from job hiring, to customer insight work, airport security, and even education to see if students are engaged or doing their homework.
Such technology, she said, was in use all over the world, from Europe to the US and China. Taigusys, a company that specialises in emotion recognition systems and whose main office is in Shenzhen, says it has used them in settings ranging from care homes to prisons, while according to reports earlier this year, the Indian city of Lucknow is planning to use the technology to spot distress in women as a result of harassment - a move that has met with criticism, including from digital rights organisations.
While Hagerty said emotion recognition technology might have some potential benefits, these must be weighed against concerns around accuracy and racial bias, as well as whether the technology was even the right tool for a particular job.
"We need to be having a much wider public conversation and deliberation about these technologies," she said.
The new project allows users to try out emotion recognition technology. The site notes that "no personal data is collected and all images are stored on your device." In one game, users are invited to pull a series of faces to fake emotions and see if the system is fooled.
"The claim of the people who are developing this technology is that it is reading emotion," said Hagerty. But, she added, in reality the system was reading facial movement and then combining that with the assumption that those movements are linked to emotions - for example a smile means someone is happy.
"There is lots of really solid science that says that is too simple; it doesn't work quite like that," said Hagerty, adding that even just human experience showed it was possible to fake a smile. "That is what that game was: to show you didn't change your inner state of feeling rapidly six times, you just changed the way you looked [on your] face," she said.
Some emotion recognition researchers say they are aware of such limitations. But Hagerty said the hope was that the new project, which is funded by Nesta (National Endowment for Science, Technology and the Arts), will raise awareness of the technology and promote discussion around its use.
"I think we are beginning to realise we are not really 'users' of technology, we are citizens in world being deeply shaped by technology, so we need to have the same kind of democratic, citizen-based input on these technologies as we have on other important things in societies," she said.
Vidushi Marda, senior programme officer at the human rights organisation Article 19, said it was crucial to press "pause" on the growing market for emotion recognition systems.
"The use of emotion recognition technologies is deeply concerning as not only are these systems based on discriminatory and discredited science, their use is also fundamentally inconsistent with human rights," she said. "An important learning from the trajectory of facial recognition systems across the world has been to question the validity and need for technologies early and often - and projects that emphasise on the limitations and dangers of emotion recognition are an important step in that direction." | Scientists at the U.K.'s University of Cambridge have created emojify.info, a website where the public can test emotion recognition systems via online games, using their own computer cameras. One game has players make faces to fake emotions in an attempt to fool the systems; another challenges the technology to interpret facial expressions contextually. Cambridge's Alexa Hagerty cited a lack of public awareness of how widespread the technology is, adding that its potential benefits should be weighed against concerns about accuracy, racial bias, and suitability. Hagerty said although the technology's developers claim these systems can read emotions, in reality they read facial movements and combine them with existing assumptions that these movements embody emotions (as in, a smile means one is happy). The researchers said their goal is to raise awareness of the technology and to encourage dialogue about its use. | [] | [] | [] | scitechnews | None | None | None | None | Scientists at the U.K.'s University of Cambridge have created emojify.info, a website where the public can test emotion recognition systems via online games, using their own computer cameras. One game has players make faces to fake emotions in an attempt to fool the systems; another challenges the technology to interpret facial expressions contextually. Cambridge's Alexa Hagerty cited a lack of public awareness of how widespread the technology is, adding that its potential benefits should be weighed against concerns about accuracy, racial bias, and suitability. Hagerty said although the technology's developers claim these systems can read emotions, in reality they read facial movements and combine them with existing assumptions that these movements embody emotions (as in, a smile means one is happy). The researchers said their goal is to raise awareness of the technology and to encourage dialogue about its use.
611 | China Creates Its Own Digital Currency | China's digital yuan cryptocurrency is expected to give its government a vast economic and social monitoring tool, and strip users of their anonymity. Beijing is preparing the digital currency for international use, and designing it to be unconnected to the global financial system, to permit more centralized control. The cryptocurrency is accessible from the owner's cellphone or on a card, and it may be spent without an online connection. Analysts and economists say the digital yuan could gain a foothold on the fringes of the international financial system, allowing people in impoverished nations to transfer money internationally. With a trackable digital currency, China's government could impose and collect fines as soon as an infraction is detected, or enable parties sanctioned by the U.S. to exchange money outside of sanctions.
612 | Why the Supreme Court's Ruling for Google Over Oracle Is a Win for Innovation | On Monday, the U.S. Supreme Court ended a decade-long legal battle in ruling that Google did not violate Oracle's copyrights associated with the Java programming language. The ruling largely maintains the use of application programming interfaces (APIs), which enable one company's hardware or software to interact with those from another. Microsoft, IBM's Red Hat, and Mozilla were among the technology companies that filed briefs contending new software development could be hampered if Oracle's demands were upheld. The Center for Democracy and Technology's Stan Adams said, "This decision is a huge win for developers and consumers. When software is interoperable - meaning it can talk to other software programs - it is easier to innovate and build new services."
613 | AI Tool Can Help Detect Melanoma | Melanoma is a type of malignant tumor responsible for more than 70 percent of all skin cancer-related deaths worldwide. For years, physicians have relied on visual inspection to identify suspicious pigmented lesions (SPLs), which can be an indication of skin cancer. Such early-stage identification of SPLs in primary care settings can improve melanoma prognosis and significantly reduce treatment cost.
The challenge is that quickly finding and prioritizing SPLs is difficult, due to the high volume of pigmented lesions that often need to be evaluated for potential biopsies. Now, researchers from MIT and elsewhere have devised a new artificial intelligence pipeline, using deep convolutional neural networks (DCNNs) and applying them to analyzing SPLs through the use of wide-field photography common in most smartphones and personal cameras. | Researchers at the Massachusetts Institute of Technology (MIT) have designed an artificial intelligence system that analyzes wide-field images of patients' skin in order to detect melanoma more efficiently. The process applies deep convolutional neural networks (DCNNs) to optimize the identification and classification of suspicious pigmented lesions (SPLs) in wide-field images. The MIT researchers trained the system on 20,388 wide-field images from 133 patients at Spain's Hospital Gregorio Marañón, and on publicly available images. Dermatologists visually classified lesions in the images for comparison, and the system achieved more than 90.3% sensitivity in differentiating SPLs from nonsuspicious lesions, skin, and complex backgrounds.
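As a hedged sketch of the general approach (not the MIT pipeline itself), a DCNN classifier for this task is often built by fine-tuning a pretrained convolutional backbone on lesion crops labelled suspicious or non-suspicious; the backbone, input size, and normalisation below are assumptions for illustration.

```python
# Sketch only: fine-tune a pretrained convolutional network to label lesion
# crops from wide-field photographs as suspicious (SPL) or non-suspicious.
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)        # 2 classes: SPL vs. non-SPL

preprocess = transforms.Compose([                    # applied to each lesion crop
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def train_epoch(model, loader, optimiser):
    """loader yields (crop_batch, label_batch) with labels 0/1."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for crops, labels in loader:
        optimiser.zero_grad()
        loss = criterion(model(crops), labels)
        loss.backward()
        optimiser.step()
```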
614 | Old Programming Language Suddenly Getting More Popular Again | The latest edition of the Tiobe Programming Community index saw Objective-C fall off the list of the 20 most popular programming languages, while Fortran has risen from 34th place to 20th in the past year. Tiobe hypothesized that Objective-C maintained its popularity partly because the adoption of Swift decelerated as mobile application developers focused on languages that could be used for building apps on multiple platforms. Fortran, released by IBM in the 1950s, remains a popular language in scientific computing circles. Tiobe said, "Fortran was the first commercial programming language ever, and is gaining popularity thanks to the massive need for [scientific] number crunching."
615 | ML Approach Speeds Up Search for Molecular Conformers | Researchers at Finland's Aalto University developed a molecular conformer search procedure that integrates an active learning Bayesian optimization algorithm with quantum chemistry techniques to accelerate the process. Searching for molecular conformers previously required the relaxation of thousands of structures, entailing a significant commitment of time and computational resources even when applied to small molecules. The Aalto team's algorithm samples the structures with low energies or high energy uncertainties, to minimize the required data points. The researchers tested the machine learning procedure on four amino acids, and found low-energy conformers in good correspondence with experimental measurements and reference calculations while using less than 10% of the computational cost of the current fastest method.
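A generic sketch of that active-learning loop, assuming the conformer is described by its dihedral angles and the energy surface is modelled with a Gaussian-process surrogate; the kernel, the acquisition rule, and all parameter values are illustrative and are not the Aalto implementation.

```python
# Illustrative active-learning step: fit a surrogate to the energies computed
# so far, then pick the next conformer where the predicted energy is low or
# the uncertainty is high (lower-confidence-bound style acquisition).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_conformer(sampled_angles, sampled_energies, candidate_angles, kappa=2.0):
    """sampled_angles: (n, d) dihedral angles already evaluated with quantum chemistry.
    candidate_angles: (m, d) untried conformations; returns the most promising one."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0), normalize_y=True)
    gp.fit(sampled_angles, sampled_energies)
    mu, sigma = gp.predict(candidate_angles, return_std=True)
    acquisition = mu - kappa * sigma         # low energy OR high uncertainty wins
    return candidate_angles[np.argmin(acquisition)]
```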
616 | Deep Learning Networks Prefer the Human Voice - Just Like Us | New York, NY - April 6, 2021 - The digital revolution is built on a foundation of invisible 1s and 0s called bits. As decades pass, and more and more of the world's information and knowledge morph into streams of 1s and 0s, the notion that computers prefer to "speak" in binary numbers is rarely questioned. According to new research from Columbia Engineering, this could be about to change.
A new study from Mechanical Engineering Professor Hod Lipson and his PhD student Boyuan Chen proves that artificial intelligence systems might actually reach higher levels of performance if they are programmed with sound files of human language rather than with numerical data labels. The researchers discovered that in a side-by-side comparison, a neural network whose "training labels" consisted of sound files reached higher levels of performance in identifying objects in images, compared to another network that had been programmed in a more traditional manner, using simple binary inputs.
"To understand why this finding is significant," said Lipson, James and Sally Scapa Professor of Innovation and a member of Columbia's Data Science Institute , "It's useful to understand how neural networks are usually programmed, and why using the sound of the human voice is a radical experiment."
When used to convey information, the language of binary numbers is compact and precise. In contrast, spoken human language is more tonal and analog, and, when captured in a digital file, non-binary. Because numbers are such an efficient way to digitize data, programmers rarely deviate from a numbers-driven process when they develop a neural network.
Lipson, a highly regarded roboticist, and Chen, a former concert pianist, had a hunch that neural networks might not be reaching their full potential. They speculated that neural networks might learn faster and better if the systems were "trained" to recognize animals, for instance, by using the power of one of the world's most highly evolved sounds - the human voice uttering specific words.
One of the more common exercises AI researchers use to test out the merits of a new machine learning technique is to train a neural network to recognize specific objects and animals in a collection of different photographs. To check their hypothesis, Chen, Lipson and two students, Yu Li and Sunand Raghupathi, set up a controlled experiment. They created two new neural networks with the goal of training both of them to recognize 10 different types of objects in a collection of 50,000 photographs known as "training images." | Columbia University's Hod Lipson and Boyuan Chen demonstrated that artificial intelligence systems programmed with sound files of human language can outperform those coded with numerical data labels. The engineers created two neural networks and trained them to recognize 10 different types of objects in a set of 50,000 photos. One system was trained with binary inputs, while the other was fed a data table containing photos of animals or objects with corresponding audio files of a human voice speaking the names of those animals or objects. The Columbia researchers found that when presented with an image, the binary-programmed network answered with 1s and 0s, while the other network vocalized the name of the imaged object. When tested with ambiguous images, the voice-trained network was found to be 50% accurate, while the numerically trained network was only 20% accurate.
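The schematic sketch below contrasts the two kinds of training target: a one-hot class code for the conventionally trained network versus an audio representation (here, a flattened spectrogram of the spoken class name) for the voice-trained one. The shared backbone and the mean-squared-error objective are assumptions for illustration, not the authors' exact setup.

```python
# Schematic contrast of the two label types; sizes and losses are illustrative.
import torch.nn as nn

N_CLASSES, SPEC_BINS = 10, 128

def make_backbone(out_dim):
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, out_dim),
    )

binary_net = make_backbone(N_CLASSES)   # regresses onto one-hot label vectors
voice_net = make_backbone(SPEC_BINS)    # regresses onto a voice spectrogram

def losses(images, onehot_labels, voice_spectrograms):
    """Same images, same loss; only the training target differs."""
    mse = nn.MSELoss()
    return (mse(binary_net(images), onehot_labels),
            mse(voice_net(images), voice_spectrograms))
```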
620 | How a Moving Platform for 3D Printing Can Cut Waste, Costs | 3-D printing has the potential to revolutionize product design and manufacturing in a vast range of fields - from custom components for consumer products, to 3-D printed dental products and bone and medical implants that could save lives. However, the process also creates a large amount of expensive and unsustainable waste and takes a long time, making it difficult for 3-D printing to be implemented on a wide scale.
Each time a 3-D printer produces custom objects, especially unusually-shaped products, it also needs to print supports - printed stands that balance the object as the printer creates it layer by layer, helping maintain its shape integrity. However, these supports must be manually removed after printing, which requires finishing by hand and can result in shape inaccuracies or surface roughness. The materials the supports are made from often cannot be re-used, and so they're discarded, contributing to the growing problem of 3-D printed waste material.
For the first time, researchers in USC Viterbi's Daniel J. Epstein Department of Industrial and Systems Engineering have created a low-cost reusable support method to reduce the need for 3-D printers to print these wasteful supports, vastly improving cost-effectiveness and sustainability for 3-D printing.
The work, led by Yong Chen, professor of industrial and systems engineering, and PhD student Yang Xu, has been published in Additive Manufacturing.
Traditional 3-D printing using the Fused Deposition Modeling (FDM) technique prints layer by layer, directly onto a static metal surface. The new prototype instead uses a programmable, dynamically-controlled surface made of moveable metal pins to replace the printed supports. The pins rise up as the printer progressively builds the product. Chen said that testing of the new prototype has shown it saves around 35% in materials used to print objects.
"I work with biomedical doctors who 3-D print using biomaterials to build tissue or organs," Chen said. "A lot of the material they use are very expensive-we're talking small bottles that cost between $500 to $1000 each."
"For standard FDM printers, the materials cost is something like $50 per kilogram, but for bioprinting, it's more like $50 per gram. So if we can save 30% on material that would have gone into printing these supports, that is a huge cost saving for 3-D printing for biomedical purposes," Chen said.
In addition to the environmental and cost impacts of material wastage, traditional 3-D printing processes using supports are also time-consuming, Chen said.
"When you're 3-D printing complex shapes, half of the time you are building the parts that you need, the other half of the time you're building the supports. So with this system, we're not building the supports. Therefore, in terms of printing time, we have a savings of about 40%."
Chen said that similar prototypes developed in the past relied on individual motors to raise each of the mechanical supports, resulting in highly energy-intensive products that were also much more expensive to purchase, and thus not cost-effective for 3-D printers.
"So if you had 100 moving pins and the cost of every motor is around $10, the whole thing is $1,000, in addition to 25 control boards to control 100 different motors. The whole thing would cost well over $10,000."
The research team's new prototype works by running each of its individual supports from a single motor that moves a platform. The platform raises groups of metal pins at the same time, making it a cost-effective solution. Based on the product design, the program's software would tell the user where they need to add a series of metal tubes into the base of the platform. The position of these tubes would then determine which pins would raise to defined heights to best support the 3-D printed product, while also creating the least amount of wastage from printed supports. At the end of the process, the pins can be easily removed without damaging the product.
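As a toy illustration of that planning step, the hypothetical function below assumes the part can be summarised by a grid giving the height of the lowest overhanging surface above each cell, and sets each pin just below that surface; it is not the USC software, and the clearance value is an arbitrary assumption.

```python
# Hypothetical planning helper: choose a height for each support pin so it sits
# just under the lowest downward-facing (overhanging) surface above it.
import numpy as np

def pin_heights(overhang_height_map, pin_cells, clearance=0.2):
    """overhang_height_map[i, j]: height (mm) of the lowest overhang above cell
    (i, j), or np.inf where nothing needs support; pin_cells: list of (i, j)."""
    heights = {}
    for (i, j) in pin_cells:
        h = overhang_height_map[i, j]
        heights[(i, j)] = 0.0 if np.isinf(h) else max(h - clearance, 0.0)
    return heights
```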
Chen said the system could also be easily adapted for large scale manufacturing, such as in the automotive, aerospace and yacht industries.
"People are already building FDM printers for large size car and ship bodies, as well as for consumer products such as furniture. As you can imagine, their building times are really long - we're talking about a whole day," Chen said. "So if you can save half of that, your manufacturing time could be reduced to half a day. Using our approach could bring a lot of benefits for this type of 3-D printing."
Chen said the team had also recently applied for a patent for the new technology. The research was co-authored by Ziqi Wang, previously a visiting student at USC, from the School of Computer and Communication Sciences, EPFL Switzerland, and Siyu Gong from USC Viterbi. | Researchers at the University of Southern California Viterbi School of Engineering (USC Viterbi) designed a low-cost movable surface for three-dimensional (3D) printers that reduces waste and accelerates production. The prototype platform has a programmable, dynamically controlled surface composed of movable metal pins that replace printed supports. Each individual support operates from a single motor that moves the platform. The pins elevate as the printer progressively constructs the product. USC Viterbi's Yong Chen said in tests, the device saved roughly 35% in materials usage, and was about 40% faster in printing than standard Fused Deposition Modeling 3D printers. Chen added that the system could be modified easily for large-scale manufacturing.
623 | AI Method for Generating Proteins Will Speed Up Drug Development | Artificial Intelligence is now capable of generating novel, functionally active proteins, thanks to recently published work by researchers from Chalmers University of Technology, Sweden.
"What we are now able to demonstrate offers fantastic potential for a number of future applications, such as faster and more cost-efficient development of protein-based drugs," says Aleksej Zelezniak, Associate Professor at the Department of Biology and Biological Engineering at Chalmers.
Proteins are large, complex molecules that play a crucial role in all living cells, building, modifying, and breaking down other molecules naturally inside our cells. They are also widely used in industrial processes and products, and in our daily lives.
Protein-based drugs are very common - the diabetes drug insulin is one of the most prescribed. Some of the most expensive and effective cancer medicines are also protein-based, as well as the antibody formulas currently being used to treat COVID-19.
From computer design to working proteins in just a few weeks
Current methods used for protein engineering rely on introducing random mutations to protein sequences. However, with each additional random mutation introduced, the protein activity declines.
"Consequently, one must perform multiple rounds of very expensive and time-consuming experiments, screening millions of variants, to engineer proteins and enzymes that end up being significantly different from those found in nature," says research leader Aleksej Zelezniak, continuing:
"This engineering process is very slow, but now we have an AI-based method where we can go from computer design to working protein in just a few weeks."
The new results from the Chalmers researchers were recently published in the journal Nature Machine Intelligence and represent a breakthrough in the field of synthetic proteins. Aleksej Zelezniak's research group and collaborators have developed an AI-based approach called ProteinGAN, which uses a generative deep learning approach.
In essence, the AI is provided with a large amount of data from well-studied proteins; it studies this data and attempts to create new proteins based on it.
At the same time, another part of the AI tries to figure out if the synthetic proteins are fake or not. The proteins are sent back and forth in the system until the AI cannot tell apart natural and synthetic proteins anymore. This method is well known for creating photos and videos of people who do not exist, but in this study, it was used for producing highly diverse protein variants with naturalistic-like physical properties that could be tested for their functions.
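For a sense of what a generative adversarial setup over protein sequences can look like, here is a minimal sketch using one-dimensional convolutions over the 20 amino-acid alphabet. The layer sizes, sequence length, and decoding step are assumptions for illustration and do not reproduce the published ProteinGAN architecture.

```python
# Minimal sequence-GAN sketch (assumed sizes, not the ProteinGAN architecture).
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"            # 20 letters, one channel each
SEQ_LEN = 64

class SeqGenerator(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, SEQ_LEN * 64)
        self.conv = nn.Sequential(
            nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv1d(64, len(AMINO_ACIDS), 5, padding=2),
        )

    def forward(self, z):
        x = self.fc(z).view(z.size(0), 64, SEQ_LEN)
        return self.conv(x).softmax(dim=1)       # per-position amino-acid probabilities

class SeqDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(len(AMINO_ACIDS), 64, 5, padding=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * SEQ_LEN, 1),   # natural vs. synthetic score
        )

    def forward(self, one_hot_seq):              # (B, 20, SEQ_LEN)
        return self.net(one_hot_seq)

def decode(probabilities):
    """Turn generator output (B, 20, SEQ_LEN) into amino-acid strings."""
    idx = probabilities.argmax(dim=1)
    return ["".join(AMINO_ACIDS[i] for i in row.tolist()) for row in idx]
```

Training would then alternate discriminator and generator updates exactly as in any GAN, with the natural sequences one-hot encoded over the same alphabet.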
The proteins widely used in everyday products are not always entirely natural but are made through synthetic biology and protein engineering techniques. Using these techniques, the original protein sequences are modified in the hope of creating synthetic novel protein variants that are more efficient, stable, and tailored towards particular applications. The new AI-based approach is of importance for developing efficient industrial enzymes as well as new protein-based therapies, such as antibodies and vaccines.
A cost-efficient and sustainable model
Assistant Professor Martin Engqvist, also of the Department of Biology and Biological Engineering, was involved in designing the experiments to test the AI-synthesised proteins.
"Accelerating the rate at which we engineer proteins is very important for driving down development costs for enzyme catalysts. This is the key for realising environmentally sustainable industrial processes and consumer products, and our AI model, as well as future models, will enable that. Our work is a vital contribution in that context" says Martin Engqvist.
"This kind of work is only possible in the type of multidisciplinary environment that exists at our Division - at the interface of computer science and biology. We have perfect conditions to experimentally test the properties of these AI-designed proteins," says Aleksej Zelezniak.
The next step for the researchers is to explore how the technology could be used for specific improvements to protein properties, such as increased stability, something which could have great benefit for proteins used in industrial technology.
More about: The research project
The study was conducted within a collaboration between Chalmers University of Technology, Sweden, Vilnius University Life Sciences Centre in Lithuania, and the company Biomatter Designs.
Read the article "Expanding functional protein sequence spaces using generative adversarial networks" in Nature Machine Intelligence.
For more information, please contact:
Aleksej Zelezniak, Associate Professor, Department of Biology and Biological Engineering, Chalmers University of Technology, Sweden
+ 46 31 772 81 71, [email protected]
Martin Engqvist, Assistant Professor, Department of Biology and Biological Engineering, Chalmers University of Technology, Sweden, [email protected]
Mia Halleröd Palmgren, Press Officer, +46-31-772 3252, [email protected]
________________
Chalmers University of Technology in Gothenburg, Sweden, conducts research and education in technology and natural sciences at a high international level. The university has 3100 employees and 10,000 students, and offers education in engineering, science, shipping and architecture.
With scientific excellence as a basis, Chalmers promotes knowledge and technical solutions for a sustainable world. Through global commitment and entrepreneurship, we foster an innovative spirit, in close collaboration with wider society. The EU's biggest research initiative - the Graphene Flagship - is coordinated by Chalmers. We are also leading the development of a Swedish quantum computer.
Chalmers was founded in 1829 and has the same motto today as it did then: Avancez - forward. | Researchers at Sweden's Chalmers University of Technology have developed artificial intelligence (AI) that can synthesize novel, functionally active proteins. Chalmers' Aleksej Zelezniak said the method can proceed from design to working protein in just a few weeks, much more quickly than current protein-engineering techniques. The ProteinGAN approach involves feeding the AI a large dataset of well-studied proteins, which it analyzes and attempts to generate new proteins; concurrently, another part of the AI tries to determine if the synthetic proteins are natural or not. Said Chalmers' Martin Engqvist, "Accelerating the rate at which we engineer proteins is very important for driving down development costs for enzyme catalysts. This is the key for realizing environmentally sustainable industrial processes and consumer products, and our AI model, as well as future models, will enable that."
624 | Robot Guide Dog Could Help People Who Are Blind Navigate | By Matthew Sparkes
Guide dogs offer social, physical and mental benefits for some people who are blind, but training them is a costly and lengthy process, so researchers have created a robotic alternative.
Zhongyu Li at the University of California, Berkeley, and his colleagues programmed a four-legged, dog-like robot to safely guide people with a lead, even when faced with obstacles and narrow passages.
Image: The robotic guide dog (Anxing Xiao et al./University of California, Berkeley)
The researchers equipped an existing robot with a laser-ranging system to create an accurate map of its surroundings. They also added a rotating camera that remains ... | University of California, Berkeley (UC Berkeley) researchers have programmed a four-legged robot dog to guide blind people with a lead. The team outfitted an existing robot with a laser-ranging system to generate an environmental map, and a rotating camera that stays pointed at the person the robot is guiding to pinpoint their relative position. When a start point and an end point are inputted, the robot's software maps a route with waypoints, then calculates movement on the fly based on obstacles and the behavior of the person it is leading. UC Berkeley's Zhongyu Li said, "As time goes by and the hardware becomes more affordable, we can actually use this kind of dog to help, to serve, humans."
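As a heavily hedged toy example of the on-the-fly adjustment described above, the controller sketch below steers toward the next waypoint while easing off speed as the leash runs out of slack. The control law and every parameter here are invented for illustration; this is not the Berkeley system.

```python
# Invented toy controller: head for the waypoint, slow down when the tracked
# person falls behind so the leash is never over-stretched.
import math

def velocity_command(robot_xy, robot_heading, waypoint_xy, human_xy,
                     leash_length=1.5, v_max=0.8, k_turn=1.5):
    dx, dy = waypoint_xy[0] - robot_xy[0], waypoint_xy[1] - robot_xy[1]
    heading_error = math.atan2(dy, dx) - robot_heading
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))

    slack = max(0.0, leash_length - math.dist(robot_xy, human_xy))
    v = v_max * min(1.0, slack / 0.5)       # forward speed fades as slack vanishes
    omega = k_turn * heading_error          # turn rate toward the waypoint
    return v, omega
```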
625 | Technology Keeps Senior Center Residents Connected During Pandemic | Senior living facilities, whose residents live in their own rooms or apartments but share common areas, are looking to balance the benefits of technology by promoting social interactions as the pace of coronavirus vaccinations increases. PA-based Acts Retirement-Life Communities Inc., for instance, has rolled out robot assistants that can deliver meals or serve as mobile devices for visual and audio communications. It also has inked a deal with software firm K4Connect to employ motion-detection sensors to alert staff to residents getting out of bed and voice-activated controls to allow residents to control their lights, thermostats, and televisions. K4Connect's Scott Moody said, "What we really want to do is foster that physical engagement."
626 | AI Tool 85% Accurate at Recognizing, Classifying Wind Turbine Blade Defects | From visual thermography to ultrasound, a wide range of blade inspection techniques have been trialled, but all have displayed drawbacks.
Most inspection processes still require engineers to carry out manual examinations that involve capturing a large number of high-resolution images. Such inspections are not only time-consuming and impacted by light conditions, but they are also hazardous.
Computer scientists at Loughborough University have developed a new tool that uses artificial intelligence (AI) to analyse images of wind turbine blades to locate and highlight areas of defect.
And better yet, the system, which has received support and input from software solutions provider Railston & Co Ltd, has been 'trained' to classify defects by type - such as crack, erosion, void, and 'other' - which has the potential to lead to faster and more appropriate responses.
The proposed tool can currently analyse images and videos captured from inspections that are carried out manually or with drones.
Future research will further explore using the AI tool with drones in a bid to eliminate the need for manual inspections.
Research leads Dr Georgina Cosma and PhD student Jiajun Zhang trained the AI system to detect different types of defects using a dataset of 923 images captured by Railston & Co Ltd, the project's industrial partner.
Using image enhancement and augmentation methods, and AI algorithms (namely the Mask R-CNN deep learning algorithm), the system analyses images then highlights defect areas and labels them by type.
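To make that pipeline more concrete, here is a minimal sketch of how a Mask R-CNN detector could be set up for the four defect classes named in the article. It is an illustration only, not the authors' implementation: the use of torchvision, the COCO-pretrained backbone, and the image size are assumptions.

```python
# Illustrative sketch (not the Loughborough pipeline): fine-tuning an
# off-the-shelf Mask R-CNN for blade-defect instance segmentation.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 5  # background + crack, erosion, void, other

def build_defect_model(num_classes: int = NUM_CLASSES):
    # COCO-pretrained backbone (newer torchvision versions use weights="DEFAULT" instead).
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)

    # Swap the box-classification head for the defect classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Swap the mask-prediction head as well.
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_defect_model()
model.eval()
with torch.no_grad():
    # A random tensor stands in for an enhanced inspection image.
    prediction = model([torch.rand(3, 512, 512)])[0]
print(prediction["labels"], prediction["scores"])  # per-instance class ids and confidences
```

In practice the new heads would then be trained on the enhanced and augmented inspection images before any accuracy figures are reported.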
After developing the system, the researchers put it to the test by inputting 223 new images. The proposed tool achieved around 85% test accuracy for the task of recognising and classifying wind turbine blade defects.
The results have been published in a paper, titled 'Image Enhanced Mask R-CNN: A Deep Learning Pipeline with New Evaluation Measures for Wind Turbine Blade Defect Detection and Classification', in the Journal of Imaging.
The paper also proposes a new set of measures for evaluating defect detection systems, which is much needed given AI-based defect detection and existing systems are still in their infancy.
Of the research, Dr Cosma, the project lead, said: "AI is a powerful tool for defect detection and analysis, whether the defects are on wind turbine blades or other surfaces.
"Using AI, we can automate the process of identifying and assessing damages, making better use of experts' time and efforts.
"Of course, to build AI models we need images that have been labelled by engineers, and Railston & Co ltd are providing such images and expertise, making this project feasible."
Jiajun Zhang added: "Defect detection is a challenging task for AI, since defects of the same type can vary in size and shape, and each image is captured in different conditions (e.g. light, shield, image temperature, etc.).
"The images are pre-processed to enhance the AI-based detection process and currently, we are working on increasing accuracy further by exploring improvements to pre-processing the images and extending the AI algorithm."
Jason Watkins, of Railston & Co Ltd, says the company is "encouraged by the results from the team at Loughborough University."
He said: "AI has the potential to transform the world of industrial inspection and maintenance. As well as classifying the type of damage we are planning to develop new algorithms that will better detect the severity of the damage as well as the size and its location in space.
"We hope this will translate into better cost forecasting for our clients."
As well as further exploring how the tech can be used with drone inspections, the Loughborough experts plan to build on the research by training the system to detect the severity of defects. They are also hoping to evaluate the performance of the tool on other surfaces.
This research is funded through the EPSRC Centre for Doctoral Training in Embedded Intelligence, with industrial support from Railston & Co Ltd. | An artificial intelligence (AI) tool developed by researchers at the U.K.'s Loughborough University can analyze images of wind turbine blades to identify defects that could affect their efficiency. The system uses images captured from manual or drone inspections, image enhancement and augmentation methods, and AI algorithms like the Mask R-CNN deep learning algorithm to highlight defects and classify them by type, including crack, erosion, void, or "other." A dataset of 923 images was used to train the AI system. In a subsequent test of 223 new images, the researchers determined the system was about 85% accurate in recognizing and classifying defects. Researcher Georgina Cosma said, "Using AI, we can automate the process of identifying and assessing damages, making better use of experts' time and efforts." | [] | [] | [] | scitechnews | None | None | None | None | An artificial intelligence (AI) tool developed by researchers at the U.K.'s Loughborough University can analyze images of wind turbine blades to identify defects that could affect their efficiency. The system uses images captured from manual or drone inspections, image enhancement and augmentation methods, and AI algorithms like the Mask R-CNN deep learning algorithm to highlight defects and classify them by type, including crack, erosion, void, or "other." A dataset of 923 images was used to train the AI system. In a subsequent test of 223 new images, the researchers determined the system was about 85% accurate in recognizing and classifying defects. Researcher Georgina Cosma said, "Using AI, we can automate the process of identifying and assessing damages, making better use of experts' time and efforts."
From visual thermography to ultrasound, a wide range of blade inspection techniques have been trialled, but all have displayed drawbacks.
Most inspection processes still require engineers to carry out manual examinations that involve capturing a large number of high-resolution images. Such inspections are not only time-consuming and impacted by light conditions, but they are also hazardous.
Computer scientists at Loughborough University have developed a new tool that uses artificial intelligence (AI) to analyse images of wind turbine blades to locate and highlight areas of defect.
And better yet, the system, which has received support and input from software solutions provider Railston & Co Ltd, has been 'trained' to classify defects by type - such as crack, erosion, void, and 'other' - which has the potential to lead to faster and more appropriate responses.
The proposed tool can currently analyse images and videos captured from inspections that are carried out manually or with drones.
Future research will further explore using the AI tool with drones in a bid to eliminate the need for manual inspections.
Research leads Dr Georgina Cosma and PhD student Jiajun Zhang trained the AI system to detect different types of defects using a dataset of 923 images captured by Railston & Co Ltd, the project's industrial partner.
Using image enhancement and augmentation methods, and AI algorithms (namely the Mask R-CNN deep learning algorithm), the system analyses images then highlights defect areas and labels them by type.
After developing the system, the researchers put it to the test by inputting 223 new images. The proposed tool achieved around 85% test accuracy for the task of recognising and classifying wind turbine blade defects.
The results have been published in a paper, titled 'Image Enhanced Mask R-CNN: A Deep Learning Pipeline with New Evaluation Measures for Wind Turbine Blade Defect Detection and Classification', in the Journal of Imaging.
The paper also proposes a new set of measures for evaluating defect detection systems, which is much needed given AI-based defect detection and existing systems are still in their infancy.
Of the research, Dr Cosma, the project lead, said: "AI is a powerful tool for defect detection and analysis, whether the defects are on wind turbine blades or other surfaces.
"Using AI, we can automate the process of identifying and assessing damages, making better use of experts' time and efforts.
"Of course, to build AI models we need images that have been labelled by engineers, and Railston & Co ltd are providing such images and expertise, making this project feasible."
Jiajun Zhang added: "Defect detection is a challenging task for AI, since defects of the same type can vary in size and shape, and each image is captured in different conditions (e.g. light, shield, image temperature, etc.).
"The images are pre-processed to enhance the AI-based detection process and currently, we are working on increasing accuracy further by exploring improvements to pre-processing the images and extending the AI algorithm."
Jason Watkins, of Railston & Co Ltd, says the company is "encouraged by the results from the team at Loughborough University."
He said: "AI has the potential to transform the world of industrial inspection and maintenance. As well as classifying the type of damage we are planning to develop new algorithms that will better detect the severity of the damage as well as the size and its location in space.
"We hope this will translate into better cost forecasting for our clients."
As well as further exploring how the tech can be used with drone inspections, the Loughborough experts plan to build on the research by training the system to detect the severity of defects. They are also hoping to evaluate the performance of the tool on other surfaces.
This research is funded through the EPSRC Centre for Doctoral Training in Embedded Intelligence, with industrial support from Railston & Co Ltd. |
|||
628 | Study Shows Promise of Quantum Computing Using Factory-Made Silicon Chips | The qubit is the building block of quantum computing, analogous to the bit in classical computers. To perform error-free calculations, quantum computers of the future are likely to need at least millions of qubits. The latest study, published in the journal PRX Quantum , suggests that these computers could be made with industrial-grade silicon chips using existing manufacturing processes, instead of adopting new manufacturing processes or even newly discovered particles.
For the study, researchers were able to isolate and measure the quantum state of a single electron (the qubit) in a silicon transistor manufactured using a 'CMOS' technology similar to that used to make chips in computer processors. Furthermore, the spin of the electron was found to remain stable for a period of up to nine seconds. The next step is to use a similar manufacturing technology to show how an array of qubits can interact to perform quantum logic operations.
Professor John Morton (London Centre for Nanotechnology at UCL), co-founder of Quantum Motion, said: "We're hacking the process of creating qubits, so the same kind of technology that makes the chip in a smartphone can be used to build quantum computers.
"It has taken 70 years for transistor development to reach where we are today in computing and we can't spend another 70 years trying to invent new manufacturing processes to build quantum computers. We need millions of qubits and an ultra-scalable architecture for building them, our discovery gives us a blueprint to shortcut our way to industrial scale quantum chip production."
The experiments were performed by PhD student Virginia Ciriano Tejel (London Centre for Nanotechnology at UCL) and colleagues working in a low-temperature laboratory. During operation, the chips are kept in a refrigerated state, cooled to a fraction of a degree above absolute zero (−273 degrees Celsius).
Ms Ciriano Tejel said: "Every physics student learns in textbooks that electrons behave like tiny magnets with weird quantum properties, but nothing prepares you for the feeling of wonder in the lab, being able to watch this 'spin' of a single electron with your own eyes, sometimes pointing up, sometimes down. It's thrilling to be a scientist trying to understand the world and at the same time be part of the development of quantum computers."
A quantum computer harnesses laws of physics that are normally seen only at the atomic and subatomic level (for instance, that particles can be in two places simultaneously). Quantum computers could be more powerful than today's super computers and capable of performing complex calculations that are otherwise practically impossible.
While the applications of quantum computing differ from traditional computers, they will enable us to be more accurate and faster in hugely challenging areas such as drug development and tackling climate change, as well as more everyday problems that have huge numbers of variables - just as in nature - such as transport and logistics.
T: +44 (0) 7990 675947
E: m.greaves [at] ucl.ac.uk | U.K.-based University College London (UCL) spinout company Quantum Motion has demonstrated a single quantum-capable bit (qubit) on a standard silicon transistor chip. In a study led by UCL and Oxford University researchers, the team isolated and measured the qubit's quantum state in a silicon transistor manufactured using complementary metal-oxide semiconductor (CMOS) technology similar to that used to fabricate standard chips. The researchers found the qubit's spin remained stable for up to nine seconds. UCL's John Morton said, "We need millions of qubits and an ultra-scalable architecture for building them. Our discovery gives us a blueprint to shortcut our way to industrial-scale quantum chip production." | [] | [] | [] | scitechnews | None | None | None | None | U.K.-based University College London (UCL) spinout company Quantum Motion has demonstrated a single quantum-capable bit (qubit) on a standard silicon transistor chip. In a study led by UCL and Oxford University researchers, the team isolated and measured the qubit's quantum state in a silicon transistor manufactured using complementary metal-oxide semiconductor (CMOS) technology similar to that used to fabricate standard chips. The researchers found the qubit's spin remained stable for up to nine seconds. UCL's John Morton said, "We need millions of qubits and an ultra-scalable architecture for building them. Our discovery gives us a blueprint to shortcut our way to industrial-scale quantum chip production."
The qubit is the building block of quantum computing, analogous to the bit in classical computers. To perform error-free calculations, quantum computers of the future are likely to need at least millions of qubits. The latest study, published in the journal PRX Quantum , suggests that these computers could be made with industrial-grade silicon chips using existing manufacturing processes, instead of adopting new manufacturing processes or even newly discovered particles.
For the study, researchers were able to isolate and measure the quantum state of a single electron (the qubit) in a silicon transistor manufactured using a 'CMOS' technology similar to that used to make chips in computer processors. Furthermore, the spin of the electron was found to remain stable for a period of up to nine seconds. The next step is to use a similar manufacturing technology to show how an array of qubits can interact to perform quantum logic operations.
Professor John Morton (London Centre for Nanotechnology at UCL), co-founder of Quantum Motion, said: "We're hacking the process of creating qubits, so the same kind of technology that makes the chip in a smartphone can be used to build quantum computers.
"It has taken 70 years for transistor development to reach where we are today in computing and we can't spend another 70 years trying to invent new manufacturing processes to build quantum computers. We need millions of qubits and an ultra-scalable architecture for building them, our discovery gives us a blueprint to shortcut our way to industrial scale quantum chip production."
The experiments were performed by PhD student Virginia Ciriano Tejel (London Centre for Nanotechnology at UCL) and colleagues working in a low-temperature laboratory. During operation, the chips are kept in a refrigerated state, cooled to a fraction of a degree above absolute zero (−273 degrees Celsius).
Ms Ciriano Tejel said: "Every physics student learns in textbooks that electrons behave like tiny magnets with weird quantum properties, but nothing prepares you for the feeling of wonder in the lab, being able to watch this 'spin' of a single electron with your own eyes, sometimes pointing up, sometimes down. It's thrilling to be a scientist trying to understand the world and at the same time be part of the development of quantum computers."
A quantum computer harnesses laws of physics that are normally seen only at the atomic and subatomic level (for instance, that particles can be in two places simultaneously). Quantum computers could be more powerful than today's super computers and capable of performing complex calculations that are otherwise practically impossible.
While the applications of quantum computing differ from traditional computers, they will enable us to be more accurate and faster in hugely challenging areas such as drug development and tackling climate change, as well as more everyday problems that have huge numbers of variables - just as in nature - such as transport and logistics.
T: +44 (0) 7990 675947
E: m.greaves [at] ucl.ac.uk |
|||
629 | How Fortnite, Zelda Can Up Your Surgical Game (No Joke!) | Scalpel? Check. Gaming console? Check: Study finds video games can be a new tool on surgical tray for medical students
Video games offer students obvious respite from the stresses of studies and, now, a study from a University of Ottawa medical student has found they could benefit surgical skills training.
Arnav Gupta carries a heavy course load as a third-year student in the Faculty of Medicine, so winding down with a game of Legend of Zelda always provides relief from the rigours of study. But Zelda may be helping improve his surgical education, too, as Gupta and a team of researchers from the University of Toronto found in a paper they recently published in the medical journal Surgery. "Given the limited availability of simulators and the high accessibility of video games, medical students interested in surgical specialties should know that video games may be a valuable adjunct training for enhancing their medical education, especially in surgical specialties where it can be critical," says Gupta, whose findings were deciphered from a systematic review of 16 studies involving 575 participants.
"Particularly, in robotic surgery, being a video gamer was associated with improvements in time to completion, economy of motion, and overall performance. In laparoscopic surgery, video games-based training was associated with improvement in duration on certain tasks, economy of motion, accuracy, and overall performance," explains Gupta, who has been a gamer since age 8.
This study builds on past reviews and is the first to focus on a specific medical student population where this style of training could be feasibly implemented. Their timely study found some of the most beneficial games for students of robotic surgery and laparoscopy were: Super Monkey Ball, Half Life, Rocket League and Underground . Underground is purposely designed to assist medical students with their robotic surgery training via a video game console.
"While video games can never replace the value of first-hand experience, they do have merit as an adjunctive tool, especially when attempting to replicate important movements to surgery. For example, first-person shooting games require you to translate three dimensional motions onto a two-dimensional screen, which is like the concept of laparoscopic surgery," says Gupta, whose studies are focused on surgery in ophthalmology, which makes games like Resident Evil 4 or Trauma Center: New Blood fitted for his own ambitions.
"I'm not joking when I say that games such as Fortnite have the potential to enhance those necessary movements, providing stronger motivational components and in a low stakes environment."
Reports suggest that 55 percent of university students are gamers and are proficient with video game consoles. Yet many medical students don't admit to owning or using a gaming console.
"I think there definitely is some ambivalence towards video games in medicine," says Gupta, who is also a fan of Witcher 3 . "Given how accessible games have become and how video game technology is advancing, video games definitely are an easy go-to for the students who do love them in some capacity. The hope is that maybe this study can inspire someone to take advantage of video games' unique capabilities, reduce the general ambivalence towards it, and develop some fun ways to let students engage with surgical education."
For media requests: Paul Logothetis Media Relations Agent Cell: 613.863.7221 [email protected] | Researchers at Canada's universities of Ottawa (UOttawa) and Toronto (U of T) suggest video games could be a beneficial tool for training surgeons. UOttawa's Arnav Gupta and colleagues at U of T reviewed 16 studies involving 575 participants; Gupta said video-game expertise was associated with improvements in time to completion, economy of motion, and overall performance during robotic surgery. Video game-based training also was linked to improvement in duration on certain tasks, economy of motion, accuracy, and overall performance in laparoscopic surgery. Said Gupta, "While video games can never replace the value of first-hand experience, they do have merit as an adjunctive tool, especially when attempting to replicate important movements to surgery." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Canada's universities of Ottawa (UOttawa) and Toronto (U of T) suggest video games could be a beneficial tool for training surgeons. UOttawa's Arnav Gupta and colleagues at U of T reviewed 16 studies involving 575 participants; Gupta said video-game expertise was associated with improvements in time to completion, economy of motion, and overall performance during robotic surgery. Video game-based training also was linked to improvement in duration on certain tasks, economy of motion, accuracy, and overall performance in laparoscopic surgery. Said Gupta, "While video games can never replace the value of first-hand experience, they do have merit as an adjunctive tool, especially when attempting to replicate important movements to surgery."
Scalpel? Check. Gaming console? Check: Study finds video games can be a new tool on surgical tray for medical students
Video games offer students obvious respite from the stresses of studies and, now, a study from a University of Ottawa medical student has found they could benefit surgical skills training.
Arnav Gupta carries a heavy course load as a third-year student in the Faculty of Medicine, so winding down with a game of Legend of Zelda always provides relief from the rigours of study. But Zelda may be helping improve his surgical education, too, as Gupta and a team of researchers from the University of Toronto found in a paper they recently published in the medical journal Surgery. "Given the limited availability of simulators and the high accessibility of video games, medical students interested in surgical specialties should know that video games may be a valuable adjunct training for enhancing their medical education, especially in surgical specialties where it can be critical," says Gupta, whose findings were deciphered from a systematic review of 16 studies involving 575 participants.
"Particularly, in robotic surgery, being a video gamer was associated with improvements in time to completion, economy of motion, and overall performance. In laparoscopic surgery, video games-based training was associated with improvement in duration on certain tasks, economy of motion, accuracy, and overall performance," explains Gupta, who has been a gamer since age 8.
This study builds on past reviews and is the first to focus on a specific medical student population where this style of training could be feasibly implemented. Their timely study found some of the most beneficial games for students of robotic surgery and laparoscopy were: Super Monkey Ball, Half Life, Rocket League and Underground . Underground is purposely designed to assist medical students with their robotic surgery training via a video game console.
"While video games can never replace the value of first-hand experience, they do have merit as an adjunctive tool, especially when attempting to replicate important movements to surgery. For example, first-person shooting games require you to translate three dimensional motions onto a two-dimensional screen, which is like the concept of laparoscopic surgery," says Gupta, whose studies are focused on surgery in ophthalmology, which makes games like Resident Evil 4 or Trauma Center: New Blood fitted for his own ambitions.
"I'm not joking when I say that games such as Fortnite have the potential to enhance those necessary movements, providing stronger motivational components and in a low stakes environment."
Reports suggest that 55 percent of university students are gamers and are proficient with video game consoles. Yet many medical students don't admit to owning or using a gaming console.
"I think there definitely is some ambivalence towards video games in medicine," says Gupta, who is also a fan of Witcher 3 . "Given how accessible games have become and how video game technology is advancing, video games definitely are an easy go-to for the students who do love them in some capacity. The hope is that maybe this study can inspire someone to take advantage of video games' unique capabilities, reduce the general ambivalence towards it, and develop some fun ways to let students engage with surgical education."
For media requests: Paul Logothetis Media Relations Agent Cell: 613.863.7221 [email protected] |
|||
630 | AI-Based Tool Detects Bipolar Disorder at Earlier Stages | Many people with early-stage or first-episode bipolar disorder have cognitive deficits, such as issues with visual processing and spatial memory, but those deficits are often so subtle that the disorder can go undiagnosed for years. That could change thanks to researchers at the University of Alberta who have created a machine learning model that helps identify these subtle deficits with the goal of intervening earlier.
The study was led by Jeffrey Sawalha, a doctoral student who collaborates with the U of A's Computational Psychiatry Research Group.
Earlier diagnosis is crucial for patients with bipolar disorder as this allows psychiatrists to treat them sooner, before symptoms worsen. Evidence suggests that patients respond more strongly in the early stages to treatment with lithium.
"If you can use the cognitive test and machine learning to detect the subtle form, to prevent the progression or the emergence of a manic episode, that's the key. We obviously cannot prevent all cases, but it may be a huge benefit for some individuals," said Bo Cao , an assistant professor in the Department of Psychiatry and member of the computational psychiatry group, which also includes Russ Greiner and Andrew Greenshaw . All three are also members of the Neuroscience and Mental Health Institute .
The group trained its machine learning model by comparing patients with chronic bipolar disorder to healthy control individuals, and then demonstrated that this learned model could distinguish first-episode bipolar disorder patients from healthy controls with 76 per cent accuracy. The resulting tool can examine early markers of cognitive deficits, which can then be used for early detection of bipolar disorder.
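As a rough illustration of that train-on-chronic, test-on-first-episode setup, the sketch below fits a simple classifier to cognitive-test-style features. The data are synthetic placeholders and logistic regression is an assumed model choice; the article does not say which algorithm or features the group actually used.

```python
# Minimal sketch of training on chronic-vs-control data and evaluating on
# first-episode-vs-control data. All numbers below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_features = 12  # e.g., scores from a battery of cognitive tests

def synthetic_group(n, shift):
    # Patients get a small mean shift on a few "deficit-sensitive" features.
    X = rng.normal(size=(n, n_features))
    X[:, :4] += shift
    return X

# Training set: chronic bipolar disorder (label 1) vs. healthy controls (label 0).
X_train = np.vstack([synthetic_group(100, 0.8), synthetic_group(100, 0.0)])
y_train = np.array([1] * 100 + [0] * 100)

# Held-out set: first-episode patients (smaller, subtler shift) vs. controls.
X_test = np.vstack([synthetic_group(60, 0.4), synthetic_group(60, 0.0)])
y_test = np.array([1] * 60 + [0] * 60)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("first-episode accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```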
The U of A researchers worked with collaborators in China, who collected the data used in the machine learning model. The data were obtained using tests that targeted cognitive function. In this study the patients were supervised as they completed the tests, but most of the tests could be done virtually using a tablet.
This is in stark contrast to the current practice of obtaining information through machines such as MRIs, which provide images of the brain's structure. According to the researchers, a simple cognitive test analyzed through machine learning can yield equally valuable data.
"If we can get the same information for pennies versus hundreds of dollars, immediately versus three weeks from now, in a stress-free environment versus a stressful hospital environment, it's a win all around," said Greiner, a professor in the Faculty of Science and fellow-in-residence at the Alberta Machine Intelligence Institute (Amii).
The new tool is also beneficial in monitoring a patient's progression over time. "When it comes to followups, that information is also easier to collect. It's an easy way of monitoring symptoms," said Sawalha.
Greenshaw stressed the importance of consistent, standardized data collection to help cultivate these types of machine learning models.
"These models are wonderful, but you need the data to build the models, and one of the things that we predict will happen with the application of machine learning is it will push the health system towards collecting better evidence-based measures," said Greenshaw, a professor and associate chair in the Department of Psychiatry.
"For example, in other work we've done with antidepressant drugs, a psychiatrist trying to decide which drug to use has about a 50 per cent chance of getting it right. Applying machine learning, you can get that probability much higher, but you have to get that buy-in from physicians."
The next steps in this line of research are to validate the model with a larger group of people to obtain a more comprehensive data set. The researchers are also interested in potentially using the model to look at psychotic features in schizophrenia and examining the differences between cognitive deficits in schizophrenia and bipolar disorder.
For patients with psychiatric disorders, problems with cognitive and social functioning often are as bothersome as the symptoms themselves, said the researchers. By the time conventional testing reveals the problems, the patient's quality of life may already have deteriorated. By finding subtle cognitive deficits earlier, the new tool offers hope for a better quality of life for patients.
The study, " Individualized identification of first-episode bipolar disorder using machine learning and cognitive tests ," was published in the Journal of Affective Disorders. | A machine learning (ML) model developed by researchers at Canada's University of Alberta (UAlberta) and Chinese colleagues can help to identify subtle cognitive deficits that signify early-stage or first-episode bipolar disorder. The team trained the model by comparing patients with chronic bipolar disorder to healthy controls, then showed that the model could differentiate first-episode bipolar disorder patients from controls with 76% accuracy. The researchers think a cognitive test that uses ML analysis is a far less expensive and time-consuming technique for diagnosing bipolar disorder than brain imaging, and it can also monitor symptoms over time. | [] | [] | [] | scitechnews | None | None | None | None | A machine learning (ML) model developed by researchers at Canada's University of Alberta (UAlberta) and Chinese colleagues can help to identify subtle cognitive deficits that signify early-stage or first-episode bipolar disorder. The team trained the model by comparing patients with chronic bipolar disorder to healthy controls, then showed that the model could differentiate first-episode bipolar disorder patients from controls with 76% accuracy. The researchers think a cognitive test that uses ML analysis is a far less expensive and time-consuming technique for diagnosing bipolar disorder than brain imaging, and it can also monitor symptoms over time.
Many people with early-stage or first-episode bipolar disorder have cognitive deficits, such as issues with visual processing and spatial memory, but those deficits are often so subtle that the disorder can go undiagnosed for years. That could change thanks to researchers at the University of Alberta who have created a machine learning model that helps identify these subtle deficits with the goal of intervening earlier.
The study was led by Jeffrey Sawalha, a doctoral student who collaborates with the U of A's Computational Psychiatry Research Group.
Earlier diagnosis is crucial for patients with bipolar disorder as this allows psychiatrists to treat them sooner, before symptoms worsen. Evidence suggests that patients respond more strongly in the early stages to treatment with lithium.
"If you can use the cognitive test and machine learning to detect the subtle form, to prevent the progression or the emergence of a manic episode, that's the key. We obviously cannot prevent all cases, but it may be a huge benefit for some individuals," said Bo Cao , an assistant professor in the Department of Psychiatry and member of the computational psychiatry group, which also includes Russ Greiner and Andrew Greenshaw . All three are also members of the Neuroscience and Mental Health Institute .
The group trained its machine learning model by comparing patients with chronic bipolar disorder to healthy control individuals, and then demonstrated that this learned model could distinguish first-episode bipolar disorder patients from healthy controls with 76 per cent accuracy. The resulting tool can examine early markers of cognitive deficits, which can then be used for early detection of bipolar disorder.
The U of A researchers worked with collaborators in China, who collected the data used in the machine learning model. The data were obtained using tests that targeted cognitive function. In this study the patients were supervised as they completed the tests, but most of the tests could be done virtually using a tablet.
This is in stark contrast to the current practice of obtaining information through machines such as MRIs, which provide images of the brain's structure. According to the researchers, a simple cognitive test analyzed through machine learning can yield equally valuable data.
"If we can get the same information for pennies versus hundreds of dollars, immediately versus three weeks from now, in a stress-free environment versus a stressful hospital environment, it's a win all around," said Greiner, a professor in the Faculty of Science and fellow-in-residence at the Alberta Machine Intelligence Institute (Amii).
The new tool is also beneficial in monitoring a patient's progression over time. "When it comes to followups, that information is also easier to collect. It's an easy way of monitoring symptoms," said Sawalha.
Greenshaw stressed the importance of consistent, standardized data collection to help cultivate these types of machine learning models.
"These models are wonderful, but you need the data to build the models, and one of the things that we predict will happen with the application of machine learning is it will push the health system towards collecting better evidence-based measures," said Greenshaw, a professor and associate chair in the Department of Psychiatry.
"For example, in other work we've done with antidepressant drugs, a psychiatrist trying to decide which drug to use has about a 50 per cent chance of getting it right. Applying machine learning, you can get that probability much higher, but you have to get that buy-in from physicians."
The next steps in this line of research are to validate the model with a larger group of people to obtain a more comprehensive data set. The researchers are also interested in potentially using the model to look at psychotic features in schizophrenia and examining the differences between cognitive deficits in schizophrenia and bipolar disorder.
For patients with psychiatric disorders, problems with cognitive and social functioning often are as bothersome as the symptoms themselves, said the researchers. By the time conventional testing reveals the problems, the patient's quality of life may already have deteriorated. By finding subtle cognitive deficits earlier, the new tool offers hope for a better quality of life for patients.
The study, " Individualized identification of first-episode bipolar disorder using machine learning and cognitive tests ," was published in the Journal of Affective Disorders. |
|||
631 | Japan to Join EU, China in Issuing Digital Vaccine Passport | Like China and the European Union, Japan will issue digital vaccine passports to citizens who have been immunized against the coronavirus, in order to facilitate international travel. The Japanese government could add the digital health certificate to an app slated for release in April that would hold digital certificates for negative test results and connect to a new system that tracks progress in the government's vaccination program. The app would allow citizens to provide proof of vaccination to board a plane or check into a hotel. The Japanese government will consider EU vaccination certificates and the "CommonPass" universal digital certificate in crafting its certification standards. | [] | [] | [] | scitechnews | None | None | None | None | Like China and the European Union, Japan will issue digital vaccine passports to citizens who have been immunized against the coronavirus, in order to facilitate international travel. The Japanese government could add the digital health certificate to an app slated for release in April that would hold digital certificates for negative test results and connect to a new system that tracks progress in the government's vaccination program. The app would allow citizens to provide proof of vaccination to board a plane or check into a hotel. The Japanese government will consider EU vaccination certificates and the "CommonPass" universal digital certificate in crafting its certification standards.
|
||||
632 | Robots Could Replace Hundreds of Thousands of Oil, Gas Jobs by 2030 | Norwegian energy research firm Rystad Energy predicted robotics and automation could replace hundreds of thousands of oil and gas workers worldwide and sharply slash the industry's labor costs by 2030. The company said at least 20% of drilling, operational support, and maintenance jobs could be automated in the next decade, replacing more than 140,000 workers in the U.S. alone. Rystad calculated robotic drilling systems can potentially cut the number of roughnecks on drilling platforms by 20% to 30%, with U.S. wage costs reduced by more than $7 billion by 2030. Rystad expects technical issues like long-term reliability, and labor organizations' opposition, will delay full robotic adoption. | [] | [] | [] | scitechnews | None | None | None | None | Norwegian energy research firm Rystad Energy predicted robotics and automation could replace hundreds of thousands of oil and gas workers worldwide and sharply slash the industry's labor costs by 2030. The company said at least 20% of drilling, operational support, and maintenance jobs could be automated in the next decade, replacing more than 140,000 workers in the U.S. alone. Rystad calculated robotic drilling systems can potentially cut the number of roughnecks on drilling platforms by 20% to 30%, with U.S. wage costs reduced by more than $7 billion by 2030. Rystad expects technical issues like long-term reliability, and labor organizations' opposition, will delay full robotic adoption.
|
||||
633 | Computer Model Shows Early Death of Nerve Cells Is Crucial to Form Healthy Brains | A computer model developed by scientists at the U.K.'s University of Surrey, Newcastle University, and Nottingham University can simulate cell division, cell migration, and cell death (apoptosis), and showed how slight changes in the performance of cell division and apoptosis induce development of cortical structures in neurodevelopmental disorders. Surrey's Roman Bauer said the goal is to create a comprehensive computational model of the cerebral cortex and its development, accounting for neuronal behavior and organization. Nottingham's Marcus Kaiser said, "The team's results showed that cell death plays an essential role in developing the brain, as it influences the thickness of the cortex's layers, variety, and layer cell density." | [] | [] | [] | scitechnews | None | None | None | None | A computer model developed by scientists at the U.K.'s University of Surrey, Newcastle University, and Nottingham University can simulate cell division, cell migration, and cell death (apoptosis), and showed how slight changes in the performance of cell division and apoptosis induce development of cortical structures in neurodevelopmental disorders. Surrey's Roman Bauer said the goal is to create a comprehensive computational model of the cerebral cortex and its development, accounting for neuronal behavior and organization. Nottingham's Marcus Kaiser said, "The team's results showed that cell death plays an essential role in developing the brain, as it influences the thickness of the cortex's layers, variety, and layer cell density."
|
||||
636 | Scientists Create Next Generation of Living Robots | Computer scientists at the University of Vermont (UVM), working with Tufts University biologists, followed up on the development of self-healing biological machines from frog cells (Xenobots) by creating a new generation of Xenobots that self-assemble from individual cells, do not use muscle cells for movement, and are capable of recordable memory. The next-generation Xenobots outperformed the previous generation, and also were shown to support molecular memory and self-healing. Tufts' Doug Blackiston said, "This approach is helping us understand how cells communicate as they interact with one another during development, and how we might better control those interactions." | [] | [] | [] | scitechnews | None | None | None | None | Computer scientists at the University of Vermont (UVM), working with Tufts University biologists, followed up on the development of self-healing biological machines from frog cells (Xenobots) by creating a new generation of Xenobots that self-assemble from individual cells, do not use muscle cells for movement, and are capable of recordable memory. The next-generation Xenobots outperformed the previous generation, and also were shown to support molecular memory and self-healing. Tufts' Doug Blackiston said, "This approach is helping us understand how cells communicate as they interact with one another during development, and how we might better control those interactions."
|
||||
637 | MIT Study Finds 'Systematic' Labeling Errors in Popular AI Benchmark Datasets | The field of AI and machine learning is arguably built on the shoulders of a few hundred papers, many of which draw conclusions using data from a subset of public datasets. Large, labeled corpora have been critical to the success of AI in domains ranging from image classification to audio classification. That's because their annotations expose comprehensible patterns to machine learning algorithms, in effect telling machines what to look for in future datasets so they're able to make predictions.
But while labeled data is usually equated with ground truth, datasets can - and do - contain errors. The processes used to construct corpora often involve some degree of automatic annotation or crowdsourcing techniques that are inherently error-prone. This becomes especially problematic when these errors reach test sets, the subsets of datasets researchers use to compare progress and validate their findings. Labeling errors here could lead scientists to draw incorrect conclusions about which models perform best in the real world, potentially undermining the framework by which the community benchmarks machine learning systems.
A new paper and website published by researchers at MIT instill little confidence that popular test sets in machine learning are immune to labeling errors. In an analysis of 10 test sets from datasets that include ImageNet, an image database used to train countless computer vision algorithms, the coauthors found an average of 3.4% errors across all of the datasets. The quantities ranged from just over 2,900 errors in the ImageNet validation set to over 5 million errors in QuickDraw, a Google-maintained collection of 50 million drawings contributed by players of the game Quick, Draw!
The researchers say the mislabelings make benchmark results from the test sets unstable. For example, when ImageNet and another image dataset, CIFAR-10, were corrected for labeling errors, larger models performed worse than their lower-capacity counterparts. That's because the higher-capacity models reflected the distribution of labeling errors in their predictions to a greater degree than smaller models - an effect that increased with the prevalence of mislabeled test data.
In choosing which datasets to audit, the researchers looked at the most-used open source datasets created in the last 20 years, with a preference for diversity across computer vision, natural language processing, sentiment analysis, and audio modalities. In total, they evaluated six image datasets (MNIST, CIFAR-10, CIFAR-100, Caltech-256, ImageNet, and QuickDraw), three text datasets (20news, IMDB, and Amazon Reviews), and one audio dataset (AudioSet).
The researchers estimate that QuickDraw had the highest percentage of errors in its test set, at 10.12% of the total labels. CIFAR was second, with around 5.85% incorrect labels, while ImageNet was close behind, with 5.83%. And 390,000 label errors make up roughly 4% of the Amazon Reviews dataset.
A previous study out of MIT found that ImageNet has "systematic annotation issues" and is misaligned with ground truth or direct observation when used as a benchmark dataset. The coauthors of that research concluded that about 20% of ImageNet photos contain multiple objects, leading to a drop in accuracy as high as 10% among models trained on the dataset.
In an experiment, the researchers filtered out the erroneous labels in ImageNet and benchmarked a number of models on the corrected set. The results were largely unchanged, but when the models were evaluated only on the erroneous data, those that performed best on the original, incorrect labels were found to perform the worst on the correct labels. The implication is that the models learned to capture systematic patterns of label error in order to improve their original test accuracy.
In a follow-up experiment, the coauthors created an error-free CIFAR-10 test set to measure AI models for "corrected" accuracy. The results show that powerful models didn't reliably perform better than their simpler counterparts because performance was correlated with the degree of labeling errors. For datasets where errors are common, data scientists might be misled to select a model that isn't actually the best model in terms of corrected accuracy, the study's coauthors say.
"Traditionally, machine learning practitioners choose which model to deploy based on test accuracy - our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets," the researchers wrote. "It is imperative to be cognizant of the distinction between corrected versus original test accuracy and to follow dataset curation practices that maximize high-quality test labels."
To promote more accurate benchmarks, the researchers have released a cleaned version of each test set in which a large portion of the label errors have been corrected. The team recommends that data scientists measure the real-world accuracy they care about in practice and consider using simpler models for datasets with error-prone labels, especially for algorithms trained or evaluated with noisy labeled data.
Creating datasets in a privacy-preserving, ethical way remains a major blocker for researchers in the AI community, particularly those who specialize in computer vision. In January 2019, IBM released a corpus designed to mitigate bias in facial recognition algorithms that contained nearly a million photos of people from Flickr. But IBM failed to notify either the photographers or the subjects of the photos that their work would be canvassed. Separately, an earlier version of ImageNet , a dataset used to train AI systems around the world, was found to contain photos of naked children, porn actresses, college parties, and more - all scraped from the web without those individuals' consent.
In July 2020, the creators of the 80 Million Tiny Images dataset from MIT and NYU took the collection offline, apologized, and asked other researchers to refrain from using the dataset and to delete any existing copies. Introduced in 2006 and containing photos scraped from internet search engines, 80 Million Tiny Images was found to have a range of racist, sexist, and otherwise offensive annotations, such as nearly 2,000 images labeled with the N-word, and labels like "rape suspect" and "child molester." The dataset also contained pornographic content like nonconsensual photos taken up women's skirts.
Biases in these datasets not uncommonly find their way into trained, commercially available AI systems. Back in 2015, a software engineer pointed out that the image recognition algorithms in Google Photos were labeling his Black friends as "gorillas." Nonprofit AlgorithmWatch showed Cloud Vision API automatically labeled a thermometer held by a dark-skinned person as a "gun" while labeling a thermometer held by a light-skinned person as an "electronic device." And benchmarks of major vendors' systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) suggest facial recognition technology exhibits racial and gender bias and facial recognition programs can be wildly inaccurate, misclassifying people upwards of 96% of the time .
Some in the AI community are taking steps to build less problematic corpora. The ImageNet creators said they plan to remove virtually all of about 2,800 categories in the "person" subtree of the dataset, which were found to poorly represent people from the Global South . And this week, the group released a version of the dataset that blurs people's faces in order to support privacy experimentation. | An analysis by Massachusetts Institute of Technology (MIT) researchers demonstrated the susceptibility of popular open source artificial intelligence benchmark datasets to labeling errors. The team investigated 10 test sets from datasets, including the ImageNet database, to find an average of 3.4% errors across all datasets. The MIT investigators calculated that the Google-maintained QuickDraw database of 50 million drawings had the most errors in its test set, at 10.12% of all labels. The researchers said these mislabelings make the benchmark results from the test sets unstable. The authors wrote, "Traditionally, machine learning practitioners choose which model to deploy based on test accuracy - our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets." | [] | [] | [] | scitechnews | None | None | None | None | An analysis by Massachusetts Institute of Technology (MIT) researchers demonstrated the susceptibility of popular open source artificial intelligence benchmark datasets to labeling errors. The team investigated 10 test sets from datasets, including the ImageNet database, to find an average of 3.4% errors across all datasets. The MIT investigators calculated that the Google-maintained QuickDraw database of 50 million drawings had the most errors in its test set, at 10.12% of all labels. The researchers said these mislabelings make the benchmark results from the test sets unstable. The authors wrote, "Traditionally, machine learning practitioners choose which model to deploy based on test accuracy - our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets."
The field of AI and machine learning is arguably built on the shoulders of a few hundred papers, many of which draw conclusions using data from a subset of public datasets. Large, labeled corpora have been critical to the success of AI in domains ranging from image classification to audio classification. That's because their annotations expose comprehensible patterns to machine learning algorithms, in effect telling machines what to look for in future datasets so they're able to make predictions.
But while labeled data is usually equated with ground truth, datasets can - and do - contain errors. The processes used to construct corpora often involve some degree of automatic annotation or crowdsourcing techniques that are inherently error-prone. This becomes especially problematic when these errors reach test sets, the subsets of datasets researchers use to compare progress and validate their findings. Labeling errors here could lead scientists to draw incorrect conclusions about which models perform best in the real world, potentially undermining the framework by which the community benchmarks machine learning systems.
A new paper and website published by researchers at MIT instill little confidence that popular test sets in machine learning are immune to labeling errors. In an analysis of 10 test sets from datasets that include ImageNet, an image database used to train countless computer vision algorithms, the coauthors found an average of 3.4% errors across all of the datasets. The quantities ranged from just over 2,900 errors in the ImageNet validation set to over 5 million errors in QuickDraw, a Google-maintained collection of 50 million drawings contributed by players of the game Quick, Draw!
The researchers say the mislabelings make benchmark results from the test sets unstable. For example, when ImageNet and another image dataset, CIFAR-10, were corrected for labeling errors, larger models performed worse than their lower-capacity counterparts. That's because the higher-capacity models reflected the distribution of labeling errors in their predictions to a greater degree than smaller models - an effect that increased with the prevalence of mislabeled test data.
In choosing which datasets to audit, the researchers looked at the most-used open source datasets created in the last 20 years, with a preference for diversity across computer vision, natural language processing, sentiment analysis, and audio modalities. In total, they evaluated six image datasets (MNIST, CIFAR-10, CIFAR-100, Caltech-256, ImageNet, and QuickDraw), three text datasets (20news, IMDB, and Amazon Reviews), and one audio dataset (AudioSet).
The researchers estimate that QuickDraw had the highest percentage of errors in its test set, at 10.12% of the total labels. CIFAR was second, with around 5.85% incorrect labels, while ImageNet was close behind, with 5.83%. And 390,000 label errors make up roughly 4% of the Amazon Reviews dataset.
A previous study out of MIT found that ImageNet has "systematic annotation issues" and is misaligned with ground truth or direct observation when used as a benchmark dataset. The coauthors of that research concluded that about 20% of ImageNet photos contain multiple objects, leading to a drop in accuracy as high as 10% among models trained on the dataset.
In an experiment, the researchers filtered out the erroneous labels in ImageNet and benchmarked a number of models on the corrected set. The results were largely unchanged, but when the models were evaluated only on the erroneous data, those that performed best on the original, incorrect labels were found to perform the worst on the correct labels. The implication is that the models learned to capture systematic patterns of label error in order to improve their original test accuracy.
In a follow-up experiment, the coauthors created an error-free CIFAR-10 test set to measure AI models for "corrected" accuracy. The results show that powerful models didn't reliably perform better than their simpler counterparts because performance was correlated with the degree of labeling errors. For datasets where errors are common, data scientists might be misled to select a model that isn't actually the best model in terms of corrected accuracy, the study's coauthors say.
"Traditionally, machine learning practitioners choose which model to deploy based on test accuracy - our findings advise caution here, proposing that judging models over correctly labeled test sets may be more useful, especially for noisy real-world datasets," the researchers wrote. "It is imperative to be cognizant of the distinction between corrected versus original test accuracy and to follow dataset curation practices that maximize high-quality test labels."
To promote more accurate benchmarks, the researchers have released a cleaned version of each test set in which a large portion of the label errors have been corrected. The team recommends that data scientists measure the real-world accuracy they care about in practice and consider using simpler models for datasets with error-prone labels, especially for algorithms trained or evaluated with noisy labeled data.
Creating datasets in a privacy-preserving, ethical way remains a major blocker for researchers in the AI community, particularly those who specialize in computer vision. In January 2019, IBM released a corpus designed to mitigate bias in facial recognition algorithms that contained nearly a million photos of people from Flickr. But IBM failed to notify either the photographers or the subjects of the photos that their work would be canvassed. Separately, an earlier version of ImageNet, a dataset used to train AI systems around the world, was found to contain photos of naked children, porn actresses, college parties, and more - all scraped from the web without those individuals' consent.
In July 2020, the creators of the 80 Million Tiny Images dataset from MIT and NYU took the collection offline, apologized, and asked other researchers to refrain from using the dataset and to delete any existing copies. Introduced in 2006 and containing photos scraped from internet search engines, 80 Million Tiny Images was found to have a range of racist, sexist, and otherwise offensive annotations, such as nearly 2,000 images labeled with the N-word, and labels like "rape suspect" and "child molester." The dataset also contained pornographic content like nonconsensual photos taken up women's skirts.
Biases in these datasets not uncommonly find their way into trained, commercially available AI systems. Back in 2015, a software engineer pointed out that the image recognition algorithms in Google Photos were labeling his Black friends as "gorillas." Nonprofit AlgorithmWatch showed Cloud Vision API automatically labeled a thermometer held by a dark-skinned person as a "gun" while labeling a thermometer held by a light-skinned person as an "electronic device." And benchmarks of major vendors' systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) suggest facial recognition technology exhibits racial and gender bias and facial recognition programs can be wildly inaccurate, misclassifying people upwards of 96% of the time.
Some in the AI community are taking steps to build less problematic corpora. The ImageNet creators said they plan to remove virtually all of about 2,800 categories in the "person" subtree of the dataset, which were found to poorly represent people from the Global South. And this week, the group released a version of the dataset that blurs people's faces in order to support privacy experimentation.
638 | VR Brings Joy to People in Assisted-Living Facilities | Long-term care communities increasingly are using virtual reality (VR) devices and systems to improve residents' wellness and quality of life amid pandemic-related restrictions on visitors and activities. Studies have documented the positive effects of the technology, with a 2018 field study by researchers at the Massachusetts Institute of Technology finding that nearly 39% of assisted-living residents shown VR images related to travel and relaxation reported better perceived overall health. Companies like MyndVR sell VR packages to senior-care facilities, while others like Embodied Labs use VR to train caregivers. MyndVR's Paula Harder said, "Residents in our memory-care neighborhood have been observed to be more oriented to their surroundings...and even more coordinated in their speech and movement."
639 | Australian Researchers Use ML to Analyze Rock Art | South Australian researchers have been working with the Mimal and Marrku traditional owners of the Wilton River area in Australia's Top End to analyse the evolution of rock art through machine learning.
The study, led by Flinders University archaeologist Dr Daryl Wesley, saw the group test different styles of rock art of human figures in Arnhem Land labelled "northern running figures," "dynamic figures," "post dynamic figures," and "simple figures with boomerangs" to understand how the styles relate to one another.
The team used machine learning to analyse images of rock art collected during surveys in Marrku country in 2018 and 2019.
The approach used previously trained and published convolutional neural network models and dataset combinations that were each designed and trained for object classification.
Co-authors of the findings published this week in Australian Archaeology, Flinders University PhD candidate in archaeology Jarrad Kowlessar and Dr Ian Moffat, told ZDNet the team then used transfer learning to deploy these networks on its dataset without retraining and then analysed the way the models responded or activated on a rock art dataset.
"This approach allows an unbiased classification of style as well as allowing us to make use of neural networks without access to a large sized dataset of rock art that would be required for training a model from scratch," Moffat said.
"Our analysis of the model activation was conducted using the 't-distributed stochastic neighbour embedding (t-SNE)' technique which is a non-linear method for dimensionality reduction. This method helps make sense and interrogate why the models have activated in the ways that they have for the different data points."
The reconstructed rock art chronology uses existing datasets of more than 14 million different photos, ranging from animals such as dogs, cats, lizards, and insects to objects like chairs, tables, and cups.
"In total, the computer saw more than 1000 different types of objects and learned to tell the difference between them just by looking at photos of them," Wesley added.
"The important skill this computer developed was a mathematical model that has the ability to tell how similar two different images are to one another."
The methodology removed a large degree of individual human interpretation and possible bias by using a machine learning approach called transfer learning. This allowed the computer to understand how each style related to one another directly, independently of the researchers.
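In outline, that pipeline (a pretrained network used as a fixed feature extractor, pairwise similarity between the resulting feature vectors, and t-SNE to lay the styles out in two dimensions) can be sketched as below. The choice of ResNet50, the file names, and the use of cosine similarity are illustrative assumptions, not details taken from the study:

```python
import numpy as np
from sklearn.manifold import TSNE
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Pretrained network used purely as a fixed feature extractor (no retraining).
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def embed(paths):
    """Turn a list of motif photographs into one feature vector per image."""
    batch = np.stack([img_to_array(load_img(p, target_size=(224, 224))) for p in paths])
    return extractor.predict(preprocess_input(batch), verbose=0)

def style_similarity(a, b):
    """Cosine similarity between two feature vectors (closer to 1 = more alike)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# image_paths is assumed to be a list of rock-art image files, one per figure.
image_paths = ["motif_001.jpg", "motif_002.jpg", "motif_003.jpg"]
features = embed(image_paths)
print(style_similarity(features[0], features[1]))

# Non-linear 2-D projection so stylistically similar motifs land near each other.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(features)
print(coords.shape)  # (3, 2)
```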
Moffat said the team began the research as it was excited by the possibility of using digital technologies to help understand Australia's stunning rock art record.
"In particular, while traditional rock art analysis is a great way to record motifs, the interpretation of these figures relies on the researcher using their own judgement to classify images on the basis of style," he said.
"Obviously the result of these classifications are heavily influenced by the researcher's previous experience and training. Our new transfer learning approach works without being influenced by the biases of researchers and so provides an entirely new lens through which to understand style."
Kowlessar said the algorithm ordered the styles in the same chronology that archaeologists have ordered them, by inspecting which appear on top of which.
"This shows that similarity and time are closely linked in the Arnhem Land rock art and that human figures drawn closer in time were more similar to one another than those drawn a long time apart," he explained.
"The exciting thing about our research is not that it is finding things that humans have missed but that it is replicating the results of other studies that have used a more traditional approach," Moffat told ZDNet. "This demonstrates that our approach is working and suggests it has exciting potential to contribute to rock art studies elsewhere."
He said the method used can be applied to a variety of stylistic identification.
"This is very well suited to research in art where 'style' is a dominant factor but also applicable to other materials," he said. "One such place we would like to continue to use this is in the analysis and identification of animal species in rock art." | South Australian researchers at Flinders University are analyzing the evolution of rock art via machine learning. The team studied images of art collected during surveys of the Arnhem Land region using previously trained and published convolutional neural network models and dataset combinations each designed for object classification. The Flinders investigators used transfer learning to deploy these networks on the dataset without retraining, and analyzed the models' response or activation on a rock art dataset. Flinders' Daryl Wesley said the computer observed over 1,000 different types of objects, and learned to differentiate them by looking at photos. Flinders' Ian Moffat said transfer learning removed a significant amount of human bias from the analysis, and an especially exciting aspect of this research is that "it is replicating the results of other studies that have used a more traditional approach."
640 | Smart Bandage Could Hasten Healing, Might Even Detect Covid | THE INSTITUTE Patients with an open wound, such as a bedsore or a foot ulcer, need to be checked frequently to see how well it is healing. That can require regular trips to a doctor's office. But patients might not have to make as many of those visits, thanks to a new smart bandage developed by IEEE Fellow Ravinder Dahiya and other researchers at the University of Glasgow . Dahiya is with the university's Bendable Electronics and Sensing Technologies group .
The flexible adhesive patch is 3 centimeters by 6 cm and can be used to apply pressure to help a wound heal. It is the first bandage to use sensors that simultaneously measure how much strain is being put on the skin and the patient's temperature, which can affect the healing process. The readings from the dressing can be sent to a health care provider via a smartphone app the researchers developed.
Monitoring wound healing isn't the only potential application. Dahiya says the bandage can be used to monitor breathing and even detect COVID-19 symptoms.
The research was published in the open-access paper "Smart Bandage With Wireless Strain and Temperature Sensors and Batteryless NFC Tag," which can be downloaded from the IEEE Xplore Digital Library.
"Temperature and strain are two parameters that have hardly ever been combined for wound assessment," the researchers wrote.
The flexible adhesive patch is 3 centimeters by 6 cm and can be used to apply pressure to help a wound heal. Photo: BEST group/University of Glasgow
Various techniques are used to speed up the healing process, including using skin substitutes (for hard-to-heal wounds), as well as electroceuticals with piezoelectric materials-based dressings and negative-pressure therapy, which increase the flow of blood while keeping the wound moist.
"A compression bandage that applies just the right pressure could also hasten healing," Dahiya says. But figuring out the correct pressure and how to monitor body temperature was a challenge. Studies show that wound healing is best at a body temperature between 36° and 38° C.
The team's clear, adhesive bandage uses two types of sensors and a batteryless near-field communication (NFC) tag. One sensor monitors the patient's temperature while the other one checks how much strain is being put on the skin. Transparent polydimethylsiloxane was used to make the strain sensor. PDMS is the most widely used silicon-based organic polymer because of its versatility. The NFC tag transmits the data from the sensors wirelessly to the smartphone app.
The researchers found the strain sensor could determine the right amount of pressure for the compression bandage, and the temperature sensor could detect if the patient is spiking a fever and therefore might have an infection.
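A toy sketch of the kind of threshold logic the companion app could apply to each wireless reading follows; the 36-38 °C healing range and the fever check come from the article, while the strain band and function names are invented placeholders:

```python
def assess_reading(temperature_c, strain_pct, target_strain=(15.0, 25.0)):
    """Return simple alerts from one bandage reading received over NFC.

    temperature_c: skin temperature from the temperature sensor
    strain_pct:    strain from the PDMS strain sensor (illustrative units)
    target_strain: compression band assumed appropriate for this patient
    """
    alerts = []
    if temperature_c > 38.0:
        alerts.append("possible fever or infection: notify clinician")
    elif temperature_c < 36.0:
        alerts.append("below the optimal 36-38 C healing range")
    low, high = target_strain
    if strain_pct < low:
        alerts.append("compression looser than target: tighten bandage")
    elif strain_pct > high:
        alerts.append("compression tighter than target: loosen bandage")
    return alerts or ["readings within expected range"]

print(assess_reading(38.6, 12.0))
# ['possible fever or infection: notify clinician', 'compression looser than target: tighten bandage']
```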
Dahiya says the smart bandage also can be used to check the lung functions of those with respiratory conditions such as asthma, as well as people on ventilators. When the patch is placed on the patient's chest, its strain sensor can detect erratic breathing. He has tested his theory with a mannequin on a ventilator, as seen in this video.
The patch could even be used to help detect coronavirus cases, he said, because two major COVID-19 symptoms are difficulty breathing and a fever. The smart app could immediately notify a health provider, speeding up testing and possibly stopping a sick patient from infecting others.
The bandage has been tested in the lab, and its technology readiness level (TRL) is about a 5, Dahiya says. The TRL system, which can be used to assess a technology's maturity, has levels that go up to 9. A 5 rating means the technology can be tested outside the lab.
"The smart bandage could be used by just about anybody, especially frontline workers," Dahiya says. "They are the people who need it most."
A smart bandage developed by researchers at Scotland's University of Glasgow could reduce the number of in-person doctor's visits required for patients with open wounds. The clear, flexible adhesive patch applies pressure to aid healing and uses sensors to measure the amount of strain on the skin and the patient's temperature. That data is transmitted wirelessly via a near-field communication tag to a smartphone app developed by the researchers, which can send the data to healthcare providers to determine whether the bandage is providing the correct amount of pressure, or whether the patient has a fever that could indicate an infection. The bandage also can monitor the lung function of patients with respiratory conditions or who are on a ventilator, and detect symptoms of Covid-19. Notifications from the app could help speed testing and prevent patients with Covid-19 from infecting others.
642 | Threatened by Amazon, Albertsons Partners With Google to Digitalize the Grocery Shopping Experience | Just as Amazon expands its brick-and-mortar grocery store footprint, Google and grocer Albertsons Companies announced a multi-year partnership that would digitalize the grocery shopping experience for millions of American shoppers.
"Albertsons Cos. is continuing to transform into a modern retailer fit for the future, and we are leading the industry forward by providing the easiest and most exciting shopping experience for our customers," Chris Rupp, EVP and Chief Customer & Digital Officer of Albertsons Companies said in a statement. "In bringing together Google's technology expertise with our commitment to customer-centric innovation, we're providing our customers with a superior shopping experience no matter how they choose to shop with us."
The deal is significant for Google, as it looks to reach profitability in its growing cloud business, with growth largely driven by high-profile customers. Albertsons Companies is the #2 grocer in the U.S. by store count with 2,253 stores, behind Kroger's 2,750. Albertsons operates more than 20 different grocery brands, including Albertsons, Safeway, Vons and Jewel-Osco.
According to a release published by the companies, Albertsons and Google began working together at the height of the pandemic, through a virtually-held joint innovation day. The companies looked for ways to improve services for grocery shoppers. At least one of those initiatives has already rolled out: a new tool that offers helpful information about online ordering from Albertsons' stores within mobile search.
Google hopes to integrate its Cloud AI technologies, including Vision AI, Recommendations AI and Business Messages, into the grocery chain's operations, in a bid to create "the world's most predictive grocery engine." Albertsons and Google are also working on integrating Google Search and Maps to make it easier for grocery shoppers to find products within the store and integrate Google Pay into payment terminals.
Albertsons and Google have heavy competition from Amazon. The e-commerce giant recently started opening its own brick-and-mortar grocery stores under the "Amazon Fresh" banner. At least 28 additional Amazon Fresh stores are in the works, along with the 350 Whole Foods Market locations Amazon continues to operate. While Amazon's store count is still dwarfed by Albertsons, Amazon's massive cash reserves and an onslaught of vacant retail space make it a direct threat to the grocery giant.
Amazon has revolutionized the grocery shopping experience for its customers, offering expanded grocery delivery and curbside pickup services. Amazon Fresh stores even offer an Alexa-powered cart, which allows shoppers to skip the checkout line and view their saved grocery list from their Amazon account. Amazon Fresh stores also feature modified Echo Show tablets throughout the store that make it easier for shoppers to find the items they are looking for.
The companies plan to soon roll out the new shoppable maps and predictive grocery list building, but did not offer a timeline when they would be offered at the chain's 2,253 stores. Albertsons has already rolled out Google's AI-powered Business Messages in a limited capacity, offering shoppers information about COVID-19 vaccines. | A multi-year partnership between Google and Albertsons Companies, the No. 2 grocer in the U.S. by store count, aims to digitalize the grocery shopping experience at a time when competitor Amazon is expanding its brick-and-mortar grocery footprint via Amazon Fresh. The partnership would integrate Google's Cloud AI technologies into Albertsons' operations to create what Google called "the world's most predictive grocery engine." The move could help Google as it works to make its cloud business profitable and help Albertsons take on Amazon Fresh. Albertsons has launched Google's Business Messages in a limited capacity to provide shoppers with Covid-19 vaccine information, and plans to release the shoppable maps and predictive grocery-list-building capability soon.
646 | Flagging Coronavirus Misinformation Tweets Changes User Behaviors, Research Shows | When Twitter flags tweets containing coronavirus misinformation, that really does affect the degree of validity most people ascribe to those messages, says new research based on a novel branching survey by three professors at The University of Alabama in Huntsville (UAH), a part of the University of Alabama System.
America is dealing both with a pandemic and an infodemic, a term coined in a 2020 joint statement by the World Health Organization, the United Nations and other global health groups, says Dr. Candice Lanius, an assistant professor of communication arts and the first author on the paper.
Co-author researchers are Dr. William "Ivey" MacKenzie, an associate professor of management, and Dr. Ryan Weber, an associate professor of English.
"The infodemic draws attention to our unique contemporary circumstances, where there is a glut of information flowing through social media and traditional news media," says Dr. Lanius.
"Some people are naively sharing bad information, but there are also intentional bad actors sharing wrong information to further their own political or financial agendas," she says.
These bad actors often use robotic - or "bot" - accounts to rapidly share and like misinformation, hastening its spread.
"The infodemic is a global problem, just like the pandemic is a global problem," says Dr. Lanius. "Our research found that those who consume more news media, in particular right-leaning media, are more susceptible to misinformation in the context of the COVID-19 pandemic."
Why is that? While the researchers are unable to say definitively, they say that there are some possible explanations.
First, the media these survey respondents consume often relies on ideological and emotional appeals that work well for peripheral persuasion, where a follower decides whether to agree with the message based on cues other than the strength of its ideas or arguments.
A second possible explanation is that credible scientific information has been updated and improved over the past year as more empirical research has been done; the more skeptical people surveyed therefore had a perception that the right-leaning media have been consistent in their messaging while the Centers for Disease Control and other expert groups keep changing their story.
Last, the survey found that one primer for COVID-19 skepticism is geography. According to the American Communities Project, many right-leaning news media consumers happen to be more rural than urban, so they did not have the firsthand experience with the pandemic that many urban populations faced in March 2020.
"Often, attempts to correct people's misperceptions actually cause them to dig in deeper to their false beliefs, a process that psychological researchers call 'the backfire effect,'" says Dr. Weber.
"But in this study, to our pleasant surprise, we found that flags worked," he says. "Flags indicating that a tweet came from a bot and that it may contain misinformation significantly lowered participants' perceptions that a tweet was credible, useful, accurate, relevant and interesting."
First, researchers asked the survey respondents their views of COVID-19 numbers. Did they feel there is underreporting, overreporting, accurate reporting, or did they not have an opinion?
"We were interested to see how people would respond to bots and flags that echoed their own views," says Dr. MacKenzie. "So, people who believe the numbers were underreported, see tweets that claim there is underreporting and people who believe in overreporting see tweets stating that overreporting is occurring."
Participants who believed the numbers are accurate or had no opinion were randomly assigned to either an over-or underreporting group. Surveying was done in real time, so as soon as the participant answered the first question about their view of COVID-19 numbers, they were automatically assigned to one of the two groups for the rest of the survey based on their response, Dr. MacKenzie says.
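The branching assignment is simple enough to sketch; the group labels below are illustrative, and the random split for "accurate" and "no opinion" respondents is the one described above:

```python
import random

def assign_condition(belief):
    """Route a respondent to the tweet condition matching the branching design.

    belief: 'underreported', 'overreported', 'accurate', or 'no opinion'
    Respondents with a firm belief see tweets echoing that belief; the rest
    are split at random between the two conditions.
    """
    if belief in ("underreported", "overreported"):
        return belief
    return random.choice(["underreported", "overreported"])

for b in ["underreported", "accurate", "no opinion"]:
    print(b, "->", assign_condition(b))
```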
Dr. Weber says the researchers presented participants with two types of flags. The first told participants that the tweet came from a suspected bot account. The second told people that the tweet contained misinformation.
"These flags made people believe that the tweet was less credible, trustworthy, accurate, useful, relevant and interesting," Dr. Weber says. "People also expressed less willingness to engage the tweet by liking or sharing it after they saw each flag."
The order in which participants saw the flags wasn't randomized, so they always saw the flag about a bot account first.
"Therefore, we can't say whether the order of flags matters, or whether the misinformation flag is useful by itself," Dr. Weber says. "But we definitely saw that both flags in succession make people much more skeptical of bad tweets."
Flags also made most respondents say they were less likely to like or retweet the message or follow the account that created it - but not all.
"Some people showed more immunity to the flags than others," Dr. Weber says. "For instance, Fox News viewers and those who spent more time on social media were less affected by the flags than others."
The flags were also less effective at changing participants' minds about COVID-19 numbers overall, so even people who found the tweet less convincing after seeing the flags might not reexamine their opinion about COVID-19 death counts.
"However," Dr. Weber says, "some people did change their minds, most notably in the group that initially believed that COVID-19 numbers were overcounted."
People reported that they were more likely to seek out additional information from unflagged tweets than those that were flagged, Dr. MacKenzie says.
"As a whole, our research would suggest that individuals want to consume social media that is factual, and if mechanisms are in place to allow them to disregard false information, they will ignore it," Dr. MacKenzie says. "I think the most important takeaway from this research is that identifying misinformation and bot accounts will change social media users' behaviors." | University of Alabama in Huntsville (UAH) researchers found flagging tweets containing misinformation related to the coronavirus impacts their credibility among most Twitter users. UAH's Candice Lanius, William MacKenzie, and Ryan Weber surveyed respondents on whether they felt Covid-19 numbers were underreported, overreported, accurate, or had no opinion; participants convinced of underreporting or overreporting were shown tweets claiming those respective views. When presented successive flags that tweets were either from a suspected bot account or contained misinformation, participants' skepticism increased. Said MacKenzie, "Our research would suggest that individuals want to consume social media that is factual, and if mechanisms are in place to allow them to disregard false information, they will ignore it."
648 | Turing Award Goes to Creators of Computer Programming Building Blocks | When Alfred Aho and Jeffrey Ullman met while waiting in the registration line on their first day of graduate school at Princeton University in 1963, computer science was still a strange new world.
Using a computer required a set of esoteric skills typically reserved for trained engineers and mathematicians. But today, thanks in part to the work of Dr. Aho and Dr. Ullman, practically anyone can use a computer and program it to perform new tasks.
On Wednesday, the Association for Computing Machinery, the world's largest society of computing professionals, said Dr. Aho and Dr. Ullman would receive this year's Turing Award for their work on the fundamental concepts that underpin computer programming languages. Given since 1966 and often called the Nobel Prize of computing, the Turing Award comes with a $1 million prize, which the two academics and longtime friends will split.
Dr. Aho and Dr. Ullman helped refine one of the key components of a computer: the "compiler" that takes in software programs written by humans and turns them into something computers can understand. | ACM announced Jeffrey Ullman and Alfred Aho will be the recipients of this year's A.M. Turing Award for their work on the fundamental concepts that undergird computer programming languages. The scientists helped refine the compiler that efficiently translates human-written software programs into something computers can understand, and which today allows practically anyone to program computers to perform new tasks. Ullman and Aho also authored many textbooks, and taught generations of students as they distinguished software development from fields like electrical engineering or math. Columbia University's Krysta Svore said her work on quantum computers at Microsoft builds on Ullman and Aho's computing language concepts, as quantum systems require their own programming languages.
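To make the compiler idea concrete, here is a toy sketch (an editorial illustration, not code from Dr. Aho's or Dr. Ullman's work) that translates a human-readable arithmetic expression into instructions for a simple stack machine and then executes them:

```python
import operator
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")  # integers or single-character operators

def tokenize(src):
    """Split source text into number and operator tokens."""
    tokens = []
    for number, op in TOKEN.findall(src):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    return tokens

def compile_expr(tokens):
    """Compile infix arithmetic (* and / bind tighter than + and -) to stack code."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, pending = [], []
    for kind, value in tokens:
        if kind == "NUM":
            output.append(("PUSH", value))
        else:
            while pending and prec[pending[-1]] >= prec[value]:
                output.append((pending.pop(), None))
            pending.append(value)
    while pending:
        output.append((pending.pop(), None))
    return output

def run(program):
    """Execute the compiled stack-machine program."""
    ops = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
    stack = []
    for instr, arg in program:
        if instr == "PUSH":
            stack.append(arg)
        else:
            right, left = stack.pop(), stack.pop()
            stack.append(ops[instr](left, right))
    return stack.pop()

program = compile_expr(tokenize("2 + 3 * 4"))
print(program)       # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('*', None), ('+', None)]
print(run(program))  # 14
```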
649 | U.S. Covid-19 Supercomputing Group Evaluates Year-Long Effort | "The consortium is proof we were able to act fast and act together," said Dario Gil, senior vice president and director of the research division of International Business Machines Corp., who helped create the consortium.
Announced in March of last year, the consortium has 43 members, including IBM, the national laboratories of the Department of Energy, Amazon.com Inc.'s Amazon Web Services, Microsoft Corp., Intel Corp., Alphabet Inc.'s Google Cloud and Nvidia Corp.
Collectively, the group helped researchers world-wide gain access to more than 600 petaflops of computing capacity, plus more than 6.8 million compute nodes, such as computer processor chips, memory and storage components, and over 50,000 graphics-processing units.
Among the nearly 100 approved projects was one in which researchers at Utah State University worked with the Texas Advanced Computing Center, part of the University of Texas at Austin, and others to model the way virus particles disperse in a room. The goal was to understand the distribution of Covid-19 virus particles in an enclosed space.
Researchers from the University of Tennessee, Knoxville worked with Google and Oak Ridge National Laboratory on another project to identify multiple already-approved drug compounds that could inhibit the coronavirus. Two of them are currently in clinical trials.
Members of the group reviewed more than 190 project proposals from academia, healthcare organizations and companies world-wide, approving 98. The projects were chosen based on scientific merit and need for computing capacity by representatives from the consortium with backgrounds in areas such as high-performance computing, biology and epidemiology.
The results of many of these studies were used to inform local and regional government officials, said John Towns, executive associate director of engagement at the National Center for Supercomputing Applications, part of the University of Illinois at Urbana-Champaign. "A number of these things were being used as supporting evidence for decision makers," said Mr. Towns, who is also a member of the consortium's executive committee.
The consortium is still accepting applications for projects.
The group is now advocating for a formal entity called the National Strategic Computing Reserve to accelerate the pace of scientific discovery in future times of crisis. The organization would enable access to software expertise, data and computing resources that can be used by researchers. Federal officials would have to enact a law to approve such an organization and grant it funding.
"Computing and data analysis will play an increasingly important role in addressing future national emergencies, whether they be pandemics or other events such as future pandemics, tornadoes, wildfires or nuclear disasters," said Manish Parashar, director of the office of advanced cyberinfrastructure at the National Science Foundation, and a member of the consortium's executive committee.
Write to Sara Castellanos at [email protected] | The Covid-19 High-Performance Computing Consortium gave researchers free access to the world's most powerful computers over the past year. Courtesy of the consortium - whose 43 members include the U.S. Department of Energy's national laboratories and technology companies like IBM, Amazon, Microsoft, and Google - researchers across the globe were given access to more than 600 petaflops of computing capacity, more than 6.8 million compute nodes, and more than 50,000 graphics-processing units. Members of the consortium recently spoke on the progress of their initiative and advocated for a formal organization in charge of making computing resources available in the event of future pandemics, hurricanes, oil spills, wildfires, and other natural disasters. "The consortium is proof we were able to act fast and act together," said IBM's Dario Gil, who helped create the consortium. | [] | [] | [] | scitechnews | None | None | None | None | The Covid-19 High-Performance Computing Consortium gave researchers free access to the world's most powerful computers over the past year. Courtesy of the consortium - whose 43 members include the U.S. Department of Energy's national laboratories and technology companies like IBM, Amazon, Microsoft, and Google - researchers across the globe were given access to more than 600 petaflops of computing capacity, more than 6.8 million compute nodes, and more than 50,000 graphics-processing units. Members of the consortium recently spoke on the progress of their initiative and advocated for a formal organization in charge of making computing resources available in the event of future pandemics, hurricanes, oil spills, wildfires, and other natural disasters. "The consortium is proof we were able to act fast and act together," said IBM's Dario Gil, who helped create the consortium.
"The consortium is proof we were able to act fast and act together," said
Dario Gil,
senior vice president and director of the research division of International Business Machines Corp., who helped create the consortium.
Announced in March of last year, the consortium has 43 members, including IBM , the national laboratories of the Department of Energy, Amazon.com Inc.'s Amazon Web Services, Microsoft Corp. , Intel Corp. , Alphabet Inc.'s Google Cloud and Nvidia Corp.
Collectively, the group helped researchers world-wide gain access to more than 600 petaflops of computing capacity, plus more than 6.8 million compute nodes, such as computer processor chips, memory and storage components, and over 50,000 graphics-processing units.
Among the nearly 100 approved projects was one in which researchers at Utah State University worked with the Texas Advanced Computing Center, part of the University of Texas at Austin, and others to model the way virus particles disperse in a room. The goal was to understand the distribution of Covid-19 virus particles in an enclosed space.
Researchers from the University of Tennessee, Knoxville worked with Google and Oak Ridge National Laboratory on another project to identify multiple already-approved drug compounds that could inhibit the coronavirus. Two of them are currently in clinical trials.
Members of the group reviewed more than 190 project proposals from academia, healthcare organizations and companies world-wide, approving 98. The projects were chosen based on scientific merit and need for computing capacity by representatives from the consortium with backgrounds in areas such as high-performance computing, biology and epidemiology.
The results of many of these studies were used to inform local and regional government officials, said
John Towns,
executive associate director of engagement at the National Center for Supercomputing Applications, part of the University of Illinois at Urbana-Champaign. "A number of these things were being used as supporting evidence for decision makers," said Mr. Towns, who is also a member of the consortium's executive committee.
The consortium is still accepting applications for projects.
The group is now advocating for a formal entity called the National Strategic Computing Reserve to accelerate the pace of scientific discovery in future times of crisis. The organization would enable access to software expertise, data and computing resources that can be used by researchers. Federal officials would have to enact a law to approve such an organization and grant it funding.
"Computing and data analysis will play an increasingly important role in addressing future national emergencies, whether they be pandemics or other events such as future pandemics, tornadoes, wildfires or nuclear disasters," said
Manish Parashar,
director of the office of advanced cyberinfrastructure at the National Science Foundation, and a member of the consortium's executive committee.
Write to Sara Castellanos at [email protected] |
Robot Lizard Can Quickly Climb a Wall, Just Like the Real Thing
By Ibrahim Sawal
This lizard-like robot can climb vertically (Image: Christofer Clemente)
Consider the lizard. Those that climb need to be both fast and stable to avoid predation and find food. A robot made to mimic their movements has shown how the rotation of their legs and the speed at which they move up vertical surfaces help them climb efficiently.
"Most lizards look a lot like other lizards," says Christofer Clemente at the University of the Sunshine Coast, Australia. To find out why, Clemente and his team built a robot based on a lizard's body to explore its efficiency. It is about 24 centimetres long, and its legs and feet were programmed to mimic the gait of climbing lizards.
They pitted the robot against common house geckos (Hemidactylus frenatus) and Australian water dragons (Intellagama lesueurii), filming them as they completed a vertical climbing test on a carpeted wall. "We thought, what if we could make a lizard take on any shape we wanted and see how it climbed," says Clemente.
The researchers found that the best way for both lizards and robots to increase the distance they climbed was to take a Goldilocks approach - not too fast and not too slow. When the robot climbed while moving at more than 70 per cent or less than 40 per cent of its maximum speed, it had a 50 per cent chance of falling. In the sweet spot between those speeds, it always stayed on the wall. The lizards climbed at 60 to 80 per cent of their maximum running speed to maintain their grip.
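For readers who want to play with that speed window, here is a minimal toy sketch, not the researchers' control code: the only numbers taken from the study are the 40 to 70 per cent band and the roughly 50 per cent fall rate outside it; everything else is an illustrative assumption.

```python
# Toy model of the reported speed "sweet spot"; the 0.40-0.70 band and the
# ~50 per cent fall rate outside it come from the article, the rest is assumed.
import random

def climb_succeeds(speed_fraction, rng=random):
    """speed_fraction is the climbing speed as a fraction of maximum speed."""
    if 0.40 <= speed_fraction <= 0.70:
        return True                  # inside the window the robot always held on
    return rng.random() >= 0.5       # outside it, about a 50 per cent chance of falling

# Estimate the success rate at 90 per cent of maximum speed over 1,000 simulated climbs.
trials = [climb_succeeds(0.90) for _ in range(1000)]
print(sum(trials) / len(trials))     # prints roughly 0.5
```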
The robot had 100 per cent success at staying on the wall when its forelimbs were rotated outwards 20 degrees and its hind limbs 100 degrees. It also held fast to the wall when its limbs were rotated inwards at the same angles.
"It works equally as well if you rotate inwards or outwards, but we only see outward rotations in nature," says Clemente.
They also found that the robot could climb the furthest when it combined limb movements with a side-to-side spine motion. But the spine could only flex around 50 degrees before the limbs had to move as well to increase stability. Although it could also move by solely rotating its spine, the most efficient movement came from large amounts of limb movement and small spine movements.
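To see in rough terms why both kinds of motion help, here is a deliberately crude geometric sketch of per-stride reach. The formula and the leg and trunk lengths are assumptions made for illustration, not measurements from the paper.

```python
# A crude geometric illustration (a simplification, not the study's model) of why
# limb swing and lateral spine bending both add to how far each stride reaches.
import math

def stride_reach(limb_swing_deg, spine_bend_deg, leg_length=0.06, trunk_length=0.12):
    """Approximate reach per stride in metres for a ~24 cm robot.
    Leg and trunk lengths are rough guesses, not values from the study."""
    limb_part = 2 * leg_length * math.sin(math.radians(limb_swing_deg) / 2)
    spine_part = trunk_length * math.sin(math.radians(spine_bend_deg) / 2)
    return limb_part + spine_part

print(stride_reach(limb_swing_deg=80, spine_bend_deg=20))  # large limb swing, small spine bend
print(stride_reach(limb_swing_deg=20, spine_bend_deg=50))  # mostly spine motion reaches less here
```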
Looking at the lizards' phylogenetic trees showed that ancient terrestrial tetrapod lineages, such as salamanders, exclusively use rotations in their spine to move, but modern climbing lineages move their limbs to extend their reach more. "Evolution was following the same gradient as our robot, moving towards this optimum," says Clemente.
He says this shows that some lizards have found the optimum movements for climbing and that this could help build more advanced climbing machines. "If we want to build more efficient robots, the first place we should be looking is nature."
Journal reference: Proceedings of the Royal Society B, DOI: 10.1098/rspb.2020.2576