Monday, August 12, 2019

TECHNOLOGY: Technology Brings Rugged Iditarod Race to Global Audience

Via the Associated Press:
Alaska's Iditarod Trail Sled Dog Race uses technology so organizers and fans worldwide can monitor the sport in real time. This year's race has 51 contenders traveling between remote village checkpoints across the 1,000-mile (1,600-km) route, tracked electronically by operators in Anchorage hotels. Volunteers and race contractors monitor the dog teams via sleds outfitted with global-positioning system (GPS) trackers, which let fans follow them online while organizers ensure no one is missing. Some operators function as aircraft dispatchers for pilots who ferry supplies and fly out competitors and dogs that drop out of the race; others process live video streamed from checkpoints via satellite dishes. Still others oversee race-standing updates broadcast through equipment, first tested last year, that can activate a super-size hotspot over satellite links in even the most remote locations.
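
The "no one is missing" check described above amounts to comparing each team's most recent GPS fix against a freshness window. Here is a minimal sketch of that idea, with hypothetical team names and an assumed 30-minute threshold (not the race's actual tracking software):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: flag teams whose most recent GPS fix is stale.
# Team names, fix times, and the 30-minute threshold are illustrative only.
STALE_AFTER = timedelta(minutes=30)

last_fix = {
    "Team 17": datetime.now(timezone.utc) - timedelta(minutes=5),
    "Team 23": datetime.now(timezone.utc) - timedelta(minutes=55),
}

def stale_teams(fixes, now=None):
    """Return teams with no GPS update within the allowed window."""
    now = now or datetime.now(timezone.utc)
    return [team for team, ts in fixes.items() if now - ts > STALE_AFTER]

print(stale_teams(last_fix))  # e.g. ['Team 23']
```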

Friday, July 12, 2019

TECHNOLOGY: NSA Makes Ghidra, a Powerful Cybersecurity Tool, Open Source

Via Wired magazine:
The U.S. National Security Agency (NSA) has chosen to open source the cybersecurity tool Ghidra, a reverse-engineering platform that takes "compiled," deployed software and "decompiles" it. Reverse engineering allows malware analysts and threat intelligence researchers to work backward from software discovered in the wild to understand how it works, what its capabilities are, and who wrote it. NSA cybersecurity advisor Rob Joyce said Ghidra was "built for our internal use at NSA" and "helped us address some things in our work flow." Joyce noted that the NSA views the release of Ghidra as a recruiting strategy, allowing new hires to enter the agency at a higher level or contractors to provide expertise without having to first come up to speed on the tool. Added Dave Aitel, a former NSA researcher who is now chief security technology officer at Cyxtera, "Malware authors already know how to make it annoying to reverse their code. There's really no downside [to releasing Ghidra]."
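
For a feel of what "decompiling" means in practice, here is a minimal sketch of a Ghidra script (Jython) that dumps decompiler output for every function in a loaded binary. It assumes it runs inside Ghidra's Script Manager, where `currentProgram` and `monitor` are provided; the 60-second timeout is an arbitrary choice:

```python
# Minimal sketch of driving Ghidra's decompiler from a script (Jython).
# Run from the Script Manager, where `currentProgram` and `monitor` exist.
from ghidra.app.decompiler import DecompInterface

decomp = DecompInterface()
decomp.openProgram(currentProgram)

# Walk every function in the binary and print its decompiled C.
for func in currentProgram.getFunctionManager().getFunctions(True):
    results = decomp.decompileFunction(func, 60, monitor)
    if results.decompileCompleted():
        print("// ----- %s -----" % func.getName())
        print(results.getDecompiledFunction().getC())
```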

Wednesday, June 12, 2019

INNOVATION: Self-Driving Cars Risk 'Future Errors' Due to Difficulty Detecting Darker Skin Tones

Via the Washington Times:
Researchers at the Georgia Institute of Technology (Georgia Tech) have found that state-of-the-art object-detection systems, such as the sensors and cameras used in self-driving cars, are better at detecting people with lighter skin tones, meaning they are less likely to recognize darker-skinned pedestrians and stop before hitting them. The researchers examined eight image-recognition systems and found the bias in each one, with accuracy on average 5% lower for people with darker skin. The team tested the hypothesis by dividing a large pool of pedestrian images into lighter- and darker-skinned groups using the Fitzpatrick scale, a standard system for classifying skin tone. “This behavior suggests that future errors made by autonomous vehicles may not be evenly distributed across different demographic groups,” the researchers wrote.
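
As a rough illustration of the evaluation described above (not the Georgia Tech code), detection recall can be tallied separately for the lighter and darker Fitzpatrick groups; the labeled records below are made-up placeholders for detector output:

```python
# Illustrative sketch: per-group pedestrian-detection recall.
# Each record is (skin_group, pedestrian_detected); values are placeholders.
records = [
    ("light", True), ("light", True), ("light", False),
    ("dark", True), ("dark", False), ("dark", False),
]

def recall_by_group(rows):
    """Fraction of pedestrians detected, computed per skin-tone group."""
    totals, hits = {}, {}
    for group, detected in rows:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    return {g: hits[g] / totals[g] for g in totals}

print(recall_by_group(records))  # e.g. light ~0.67, dark ~0.33
```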

Sunday, May 12, 2019

INNOVATION: NYPD Says Its New Software Is Helping Analysts Track Crime Patterns More Quickly

Via the Los Angeles Times:

The New York Police Department (NYPD) is using pattern-recognition software so analysts can compare robberies, larcenies, and thefts to hundreds of thousands of crimes logged in the department's database, finding matches faster than they could manually. The Patternizr algorithm was launched in December 2016, and NYPD assistant commissioner of data analytics Evan Levine said, "The more easily that we can identify patterns in...crimes, the more quickly we can identify and apprehend perpetrators." Levine and co-developer Alex Chohlas-Wood trained Patternizr on 10 years of patterns that the department had manually identified. Patternizr accurately reproduced old crime patterns a third of the time, and matched parts of patterns 80% of the time. The software compares factors such as method of entry, type of goods stolen, and distance between crimes, and reduces the risk of racial bias by excluding suspects' race from the factors it considers when looking for patterns.
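
A toy sketch in the spirit of what the article describes: score how similar two complaints are using method of entry, goods stolen, and distance, with race deliberately absent from the features. The feature names, distance units, and equal weighting are assumptions for illustration; the NYPD's actual model and weights are not described here:

```python
import math

# Toy similarity score between two burglary complaints (not Patternizr itself).
# Suspect race is intentionally not a feature.
def similarity(a, b, max_km=10.0):
    entry = 1.0 if a["entry"] == b["entry"] else 0.0                 # same method of entry?
    goods = len(a["goods"] & b["goods"]) / max(len(a["goods"] | b["goods"]), 1)  # overlap of stolen goods
    dist_km = math.dist(a["location"], b["location"])                 # planar approximation
    proximity = max(0.0, 1.0 - dist_km / max_km)                      # closer crimes score higher
    return (entry + goods + proximity) / 3.0                          # equal weights, illustrative

burglary_a = {"entry": "rear window", "goods": {"laptop", "jewelry"}, "location": (0.0, 0.0)}
burglary_b = {"entry": "rear window", "goods": {"laptop"}, "location": (1.2, 0.9)}
print(round(similarity(burglary_a, burglary_b), 2))  # higher score = more pattern-like
```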

Wednesday, April 10, 2019

TECHNOLOGY: When Passion for Videogames Helps Land That Job

Via the Wall Street Journal:
Employers across a spectrum of industries are welcoming applicants with experience in making or playing videogames, believing such backgrounds can help workers with online collaboration, problem-solving, and other key workplace skills. For example, General Electric (GE) is hiring people with game development expertise to train robots to inspect hazardous areas via virtual reality technology, a role that GE's Ratnadeep Paul said "came out of the gaming industry." Although some people still regard gamers as socially maladroit, in recent years that assumption has been dispelled, partly due to increasingly popular online multiplayer games that encourage players to form teams and strategize via online text or voice communication. Said the Rochester Institute of Technology's Andrew Phelps, "What we used to stereotypically think of as a weird thing some folks did in their basement is now part of everyday life. Gaming has become a common touch point for people."

Sunday, March 10, 2019

TECHNOLOGY: India Fights Diabetic Blindness With Help From AI

Via NYTimes.com:
The Aravind Eye Hospital in Madurai, India, is working with Google artificial intelligence (AI) scientists to automate the identification of diabetic retinopathy. The hospital is using the new AI system to screen patients, with plans to deploy the technology in surrounding villages where eye doctors are scarce. The system is based on a neural network trained on millions of retinal scans, many showing signs of diabetic retinopathy, so it could learn to identify the disease on its own. The Aravind installation employs wall-mounted computer screens in waiting rooms to translate information into the various languages spoken by patients; the system's performance reportedly equals that of trained ophthalmologists. However, Luke Oakden-Rayner, director of medical imaging research at the Royal Adelaide Hospital in Australia, warned, “On paper, the Google system performs very well, but when you roll it out to a huge population, there can be problems that do not show up for years.”
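
For flavor, here is a minimal sketch of the general kind of image-classification setup such a screening system might use (PyTorch; the backbone, two-class labels, and random stand-in images are placeholders, not Google's or Aravind's actual pipeline):

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative only: a small classifier of the general kind used for
# diabetic-retinopathy screening. Data and labels below are fake.
model = models.resnet18()                       # backbone, no pretrained download
model.fc = nn.Linear(model.fc.in_features, 2)   # 2 classes: referable DR / no DR

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: 4 fake "retinal scans" (3x224x224) with dummy labels.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss={loss.item():.3f}")
```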

Tuesday, February 12, 2019

INNOVATION: Computer-Designed Vaccine Elicits Potent Antibodies to RSV

Via UW Medicine:
International researchers have computer-designed a nanoparticle vaccine candidate for respiratory syncytial virus (RSV), an infection caught by nearly all children under three and the leading cause of pneumonia in U.S. babies under a year old. Computationally designed protein nanoparticles enable significantly greater control over key vaccine properties, including overall size, stability, and the number of antigens presented to the immune system. University of Washington (UW) researchers said the nanoparticle vaccine, which displays the DS-Cav1 protein antigen, elicited antibody responses roughly 10 times more potent than DS-Cav1 alone. UW's Neil King said, "We believe that computationally-designed nanoparticle vaccines will ultimately be simpler to manufacture and more effective than traditional vaccines. We will continue to develop this technology so that we and others can make new vaccines better, cheaper, and faster."

Thursday, January 03, 2019

TECHNOLOGY: Google Wins U.S. Approval for Radar-Based Hand Motion Sensor

Via Reuters:
U.S. regulators have approved Google's deployment of a radar-based motion sensor, granting it a waiver to use the device at higher power levels than currently permitted. The U.S. Federal Communications Commission (FCC) said the Project Soli device "will serve the public interest by providing for innovative device control features using touchless hand gesture technology." According to the FCC, the sensor captures motion in a three-dimensional space using a radar beam to facilitate touchless control of functions or features that can benefit users with mobility or speech impairments. Google said the sensor lets users press an invisible button between the thumb and index finger, or turn a virtual dial by rubbing the thumb against the index finger. Said Google, "Even though these controls are virtual, the interactions feel physical and responsive" as feedback is produced by the haptic sensation of fingers touching.
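
A very generic sketch of the "virtual button" idea: treat the finger-to-finger distance (which a radar front end could estimate) as a 1-D signal and count press-and-release events. The distances and thresholds below are made up; Soli's real gesture pipeline is far more sophisticated and is not described in this summary:

```python
# Generic sketch, not Soli's actual pipeline: detect "virtual button presses"
# from a 1-D finger-distance signal. A press is the distance dipping below a
# threshold (fingers together) and then recovering (fingers apart).
def detect_presses(distance_mm, press_below=8.0, release_above=12.0):
    presses, pressed = 0, False
    for d in distance_mm:
        if not pressed and d < press_below:
            pressed = True                 # fingers came together
        elif pressed and d > release_above:
            pressed = False                # fingers separated again
            presses += 1
    return presses

signal = [20, 18, 12, 6, 5, 9, 15, 19, 18, 7, 6, 14, 20]  # made-up trace
print(detect_presses(signal))  # 2 presses in this trace
```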

Thursday, August 11, 2016

INNOVATION: World’s First Parallel Computer Based on Biomolecular Motor

And now, news from Germany.

A new parallel-computing approach can solve combinatorial problems, according to a study published in Proceedings of the National Academy of Sciences. Researchers from the Max Planck Institute of Molecular Cell Biology and Genetics and the Dresden University of Technology collaborated with an international team on the technology. The researchers note that although conventional electronic computers have advanced significantly in recent decades, their sequential nature prevents them from efficiently solving problems of a combinatorial nature. The number of calculations required grows exponentially with problem size, making such problems intractable for sequential computing. The new approach addresses these issues by combining well-established nanofabrication technology with molecular motors that are highly energy-efficient and inherently work in parallel. The researchers demonstrated the approach on a benchmark combinatorial problem that is very difficult to solve with sequential computers. The team says the approach is scalable and error-tolerant, and dramatically reduces the time needed to solve combinatorial problems of size N. The problem to be solved is "encoded" within a network of nanoscale channels, both by mathematically designing a geometrical network capable of representing the problem and by fabricating a physical network based on that design using lithography. The network is then explored in parallel by many protein filaments self-propelled by a molecular layer of motor proteins covering the bottom of the channels.
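
To see why sequential machines struggle, consider the subset sum problem, a canonical member of this combinatorial class: a brute-force solver must examine 2^N subsets, and it is exactly this explosion that the molecular-motor network sidesteps by exploring paths in parallel. A minimal sequential sketch (the three values are arbitrary):

```python
from itertools import combinations

# Illustrative sketch: brute-force subset sum. A sequential machine must
# enumerate all 2^N subsets, which is what makes large combinatorial
# instances intractable and motivates parallel exploration.
def subset_sums(values):
    """Return every achievable subset sum (2^N subsets for N values)."""
    sums = set()
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            sums.add(sum(combo))
    return sums

values = [2, 5, 9]
print(sorted(subset_sums(values)))  # 2^3 = 8 subsets explored sequentially
```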

Saturday, July 02, 2016

INNOVATION: Computers Read 1.8 Billion Words of Fiction to Learn How to Anticipate Human Behavior

Meanwhile at Stanford:

Researchers at Stanford University are using 600,000 fictional stories to inform their new knowledge base called Augur. The team considers the approach to be an easier, more affordable, and more effective way to train computers to understand and anticipate human behavior. Augur is designed to power vector machines in making predictions about what an individual user might be about to do, or want to do next. The system's current success rate is 71 percent for unsupervised predictions of what a user will do next, and 96 percent for recall, or identification of human events. The researchers report dramatic stories can introduce comical errors into a machine-based prediction system. "While we tend to think about stories in terms of the dramatic and unusual events that shape their plots, stories are also filled with prosaic information about how we navigate and react to our everyday surroundings," they say. The researchers note artificial intelligence will need to put scenes and objects into an appropriate context. They say crowdsourcing or similar user-feedback systems will likely be needed to amend some of the more dramatic associations certain objects or situations might inspire.
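
A toy sketch of the underlying idea (not the Stanford Augur system itself): count which everyday activities follow which in a corpus of story-like sequences, then predict the most likely next action. The three "stories" below are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy sketch: learn "what people tend to do next" from activity sequences,
# then predict the most common follow-on activity. Corpus is made up.
stories = [
    ["wake up", "brush teeth", "make coffee", "drive to work"],
    ["make coffee", "read newspaper", "drive to work"],
    ["wake up", "brush teeth", "read newspaper"],
]

following = defaultdict(Counter)
for story in stories:
    for current, nxt in zip(story, story[1:]):
        following[current][nxt] += 1      # count consecutive activity pairs

def predict_next(activity):
    """Most frequent activity observed to follow `activity` in the corpus."""
    options = following.get(activity)
    return options.most_common(1)[0][0] if options else None

print(predict_next("brush teeth"))  # 'make coffee' (ties broken by first seen)
```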