This Tuesday, President Obama awarded Margaret Hamilton the Presidential Medal of Freedom, the highest civilian honor in the US. Hamilton, now 80 years old, was recognized for her incredible accomplishments working on the Apollo 11 mission, which landed the first humans on the moon.
Hamilton began studying math at Earlham College and subsequently took a programming job at MIT, where she found her passion for Computer Science. She initially assisted with a missile defense system until MIT received the request to begin work on software for the Apollo 11 mission. She then focused her team on creating programs that would alert the astronauts if the computer processors became overloaded, and respond by prioritizing tasks in an attempt to keep the spacecraft functioning in a crisis and, ideally, save those on board. This software came in quite handy when the alarm sounded minutes before landing. Instead of the mission being aborted at the sound of the alarm, the software prioritized the most critical tasks and the astronauts were able to land safely. As the President eloquently stated when bestowing the award, "Our astronauts didn't have much time, but thankfully they had Margaret Hamilton."
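To get a feel for what "prioritizing tasks" means in software terms, here is a loose Python sketch of priority-based load shedding. It is purely illustrative and my own assumption of how such a scheme might look; the real Apollo guidance software was hand-written assembly for very different hardware.

```python
# A loose, hypothetical sketch of priority-based load shedding in Python.
# This is NOT the Apollo Guidance Computer code; it only illustrates the
# idea of dropping low-priority work when the processor is overloaded.

CAPACITY = 100  # hypothetical number of work units available per cycle

def schedule(tasks):
    """Run as many tasks as capacity allows, most critical first.

    Each task is a (name, priority, cost) tuple; lower priority number = more critical.
    Returns the tasks that ran and the tasks that were shed.
    """
    ran, shed = [], []
    used = 0
    for name, priority, cost in sorted(tasks, key=lambda t: t[1]):
        if used + cost <= CAPACITY:
            ran.append(name)
            used += cost
        else:
            shed.append(name)  # overload: less critical work is dropped, not the landing
    if shed:
        print("ALARM: processor overloaded, shedding:", shed)
    return ran, shed

# Hypothetical workload during descent: guidance stays, a lower-priority job is shed.
tasks = [("guidance", 1, 60), ("display_update", 3, 30), ("rendezvous_radar", 5, 40)]
print(schedule(tasks))
```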
While Hamilton is best known for her work making this space mission possible, she went on to develop other revolutionary software that enabled technology as we know it. For example, her team at MIT later wrote code that created the framework for the first portable computer. Hamilton is one of many women of that era whose contributions to Computer Science have often been forgotten, as they were regularly labelled "number crunchers" rather than software engineers.
In a collaboration between MIT, Adobe, the University of California at Berkeley, the University of Toronto, Texas A&M, and the University of Texas, researchers have developed a new programming language that improves the way simulation programs are designed. This new language, named Simit, is designed to cut the length of code required to produce a simulation while sustaining the desired level of complexity. In the graphics community, a persistent problem is finding a balance between complexity and efficiency. For instance, in a research program that models a ball bouncing against obstacles, this trade-off might mean implementing a simple reciprocal velocity model instead of an advanced collision response algorithm. While the collision response algorithm may make the simulation more complex and realistic, the simple reciprocal velocity model allows the simulation to run faster.
These researchers aim to eliminate this compromise by requiring the programmer to lay out the "translation between the graphical description of the system and the matrix description" only once, and then allowing them to use the "language of linear algebra" to program the rest of the simulation. In this way, Simit handles the translation between graphs and matrices itself, so the programmer no longer has to convert complex graphs to matrices and back by hand. (1)
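To make the idea concrete, here is a rough Python/NumPy sketch, not Simit itself: the graph description is translated into a matrix once, and the simulation step is then expressed as plain linear algebra. The toy spring network and all of its numbers are my own illustrative assumptions, not the researchers' example.

```python
# A rough Python/NumPy sketch of the idea behind Simit (not Simit syntax):
# describe the system once as a graph, assemble its matrix form, then work
# purely in the language of linear algebra.
import numpy as np

# Graph description: 4 particles connected by springs (edges).
num_nodes = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# One-time translation from the graph description to a matrix description:
# a graph Laplacian standing in for a stiffness/assembly matrix.
L = np.zeros((num_nodes, num_nodes))
for i, j in edges:
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= 1.0
    L[j, i] -= 1.0

# From here the simulation step is just linear algebra on the matrix form.
positions = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, -0.1], [1.0, -1.0]])
dt, stiffness = 0.1, 0.5
velocities = -stiffness * (L @ positions)   # spring-like forces from the matrix
positions = positions + dt * velocities     # explicit time step
print(positions)
```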
Another major challenge in graphics programming is that every computer handles it differently. One machine may run a simulation at unacceptably slow speeds, while another runs it at a perfect 60 fps. Additionally, the simulation may have to be rewritten in different languages to accommodate different systems. By creating a language specifically meant for graphics, Simit reduces this extra work when moving between systems. When Simit-run simulations were tested against the same simulations written in other languages, Simit consistently performed better, running between 4 and 20 times as fast.
Parkinson's disease is a very serious, incurable medical condition that affects a person's motor skills. It typically begins with tremor, stiffness, and slow movement of the hands, and may progress to severe disability and dementia. Though there is currently no cure for Parkinson's, early detection gives a better chance of stalling the symptoms with early drug and therapy treatment.
Researchers at MIT have designed software that can help detect these symptoms early on, and can even monitor the progression of the disease as the person uses their device from day to day. These symptoms have been difficult to measure quantitatively, which makes it harder for doctors to treat their patients accurately without close, lengthy observation. The software works by measuring the time taken to press and release a key (for an average healthy individual this is consistently around 100 milliseconds) and analyzing that data to decide whether the user is taking an abnormally long time pressing keys or showing large fluctuations in timing. This, in theory, may indicate the stiffness or slow hand movement that are key signs of Parkinson's. More often than not, this software would be used on already-diagnosed individuals to monitor the progression of their disease. If the data indicates a fluctuation in symptoms, doctors may be able to use this information to adjust treatment plans. The software was tested with both healthy individuals (the control group) and those with early-stage Parkinson's. The results showed that, as hypothesized, the healthy individuals were consistent in the time taken to press a key, while those with Parkinson's showed much more varied data.
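Here is a minimal Python sketch of the kind of analysis described, comparing key hold times against that roughly 100 millisecond baseline. The thresholds are made-up assumptions of mine, not the MIT team's actual algorithm.

```python
# A minimal, hypothetical illustration of keystroke hold-time analysis:
# flag sessions whose press-to-release times are unusually long or unusually
# variable. Thresholds below are invented for the example.
import statistics

BASELINE_MS = 100.0        # typical hold time for a healthy typist (per the article)
MEAN_THRESHOLD_MS = 150.0  # assumed cutoff for "abnormally long" holds
STDEV_THRESHOLD_MS = 40.0  # assumed cutoff for "large fluctuation"

def assess_hold_times(hold_times_ms):
    """Return a crude flag based on average hold time and its variability."""
    mean = statistics.mean(hold_times_ms)
    stdev = statistics.stdev(hold_times_ms)
    flagged = mean > MEAN_THRESHOLD_MS or stdev > STDEV_THRESHOLD_MS
    return {"mean_ms": round(mean, 1), "stdev_ms": round(stdev, 1), "flag_for_review": flagged}

# Hypothetical sessions: a consistent typist vs. a slower, more variable one.
print(assess_hold_times([95, 102, 98, 110, 97, 105]))
print(assess_hold_times([140, 210, 95, 260, 180, 120]))
```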
Four million people worldwide suffer from this disease, which makes this new method of early detection incredibly sought after and vital to improving quality of life for so many. With quantitative data available to the neurologists treating Parkinson's patients, they may be able to treat symptoms more accurately and collect research that may lead to improved medication or even a cure.
From unlocking your tech devices to automagically accessing your account on an ATM, facial recognition technology is becoming more and more prevalent by the day. Many tech companies are looking for ways to substitute the common written or typed character password with a more intuitive and personal form of identification. I looked at one of these methods in a previous blog post about fingerprint scanning (you can see more about this here).
Facial recognition as an alternative to character passwords is certainly its most common use in everyday life, but this technology is also used heavily in law enforcement. For instance, video capture technology may be used to record the faces of those involved in criminal acts. Stills can be taken from the video, which can then be resized and interpreted as a 3D model of the person's physical features. Computer algorithms determine many different quantitative aspects of the 3D model, such as the distance between a person's eyes, the length of their nose, the shape of their facial features, and many more. The more aspects taken into consideration, the better the accuracy when identifying the person in question. Once these calculations are run, the computer can compare the data with existing entries in the system and ultimately decide whether an arbitrary closeness threshold is met.
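As a simplified illustration of that matching step, here is a short Python sketch that reduces each face to a vector of measurements and accepts a match only when the distance to a stored record falls under a closeness threshold. The feature values, names, and threshold are all invented for the example.

```python
# A simplified, hypothetical sketch of threshold-based face matching:
# compare a probe's measurement vector to enrolled vectors and accept the
# closest one only if it is within an arbitrary closeness threshold.
import math

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database, threshold=0.15):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    name, dist = min(((n, distance(probe, v)) for n, v in database.items()),
                     key=lambda item: item[1])
    return (name, round(dist, 3)) if dist <= threshold else (None, round(dist, 3))

# Toy "database" of normalized measurements (eye distance, nose length, jaw width).
database = {
    "person_a": [0.42, 0.31, 0.58],
    "person_b": [0.50, 0.27, 0.61],
}
print(best_match([0.41, 0.30, 0.57], database))   # close to person_a: match
print(best_match([0.90, 0.05, 0.95], database))   # far from everyone: no match
```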
This form of technology is not without its drawbacks. Many consider it an invasion of privacy, as the data collected could be used without the consent of the person photographed. The tech news site WeLiveSecurity reports that in a recent study, around 75% of US consumers said they would not visit a store that openly used facial recognition technology for marketing purposes. While facial recognition technology is making great leaps in innovation, the views of the general public may limit its use due to privacy concerns.
Hackathons are put on across the world for people interested in technology to come together and build unique projects in a short period of time. These events are meant to connect people with similar interests through teamwork and competition, and largely center on Computer Science and coding. Participants brainstorm ideas and spend anywhere from 24 to 48 hours at a common location turning an idea into a presentable product. Events are held in cities across the globe and are independently proposed and then organized through Major League Hacking (MLH). Through grants and MLH organizers, the chosen venue is set up with the equipment needed to support just about any project idea, along with the necessary amenities for overnight stays.
MLH aims to give people with an interest in technology a place to come together, produce ideas, and participate in a unique collaborative programming project. With such a small amount of time to achieve a final product, many skip sleep and work nonstop. This has led to the criticism that these events encourage poorly written, sloppy code and unhealthy sleeping and eating habits. Regardless, some of these hackathons produce truly innovative products that people use on a daily basis. GroupMe, for example, started as a hackathon project.
The skills that students learn at these events are invaluable to careers in Computer Science. At many large companies, particularly Google, being able to work collaboratively and express your ideas to a team is vital to the job. Hackathons expose prospective programmers to their future career environment while allowing them to control the entirety of their project.
If you're looking for a way to get involved, check out the MLH website for a list of hackathons near you! The only requirement is an interest in technology and a willingness to work in a collaborative environment.
Distributed Denial of Service (DDoS) is a common cyber attack used to essentially take a server, website, or device offline. The resources that enable this kind of attack are, unfortunately, widely available, and therefore it is also widely used. Small-scale attacks may require only very basic computer knowledge, and botnets can be bought for relatively small amounts of money on online black markets. Although this is illegal, these attacks are extremely difficult to trace, as the traffic is sent through many computers (hence "Distributed") and can be hard to discern from regular network activity.
How Does it Work?
The source of the attack begins by building a botnet -- a network of computers, not necessarily owned by the attacker, that can be taken over and controlled remotely without the device owners' knowledge and made to act as agents of the attack. Once established, the botnet's machines simultaneously attack the target in one of several ways: sending more connection requests than the server can handle, or sending large amounts of random data to use up the target's bandwidth (how much data can be transferred from point A to point B in a set amount of time). This leaves the target's connection slow or completely cut off for however long the attack is active.
Prevention
By installing proper antivirus software and/or using a firewall, you can prevent your computer from being used as an agent of an attack. One technique that popular sites and services use to lower the chances of being overwhelmed is bandwidth oversubscription, which forces the attacker to grow their botnet much larger before it can saturate the target. DDoS mitigation is also common: it monitors the amount of information being sent and received from each source and tunes out the "noise" of the random data sent in some attacks. Overall, you probably won't be a victim of a DDoS attack unless you run a controversial website or piss off the wrong guy in a PvP match.
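As a toy illustration of what "tuning out the noise" can look like, here is a small Python sketch that tracks requests per source over a short window and drops sources that exceed a rate limit. Real mitigation services are far more sophisticated; the window and limit here are arbitrary assumptions.

```python
# A toy, hypothetical illustration of rate-based DDoS mitigation:
# remember each source's recent requests and ignore sources that flood
# past a per-window limit. Numbers are invented for the example.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

recent_requests = defaultdict(deque)  # source address -> timestamps of recent requests

def allow_request(source, now=None):
    """Return True if this source is under the rate limit, else False."""
    now = time.time() if now is None else now
    timestamps = recent_requests[source]
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()            # forget requests that fell out of the window
    if len(timestamps) >= MAX_REQUESTS_PER_WINDOW:
        return False                    # looks like flood traffic: tune it out
    timestamps.append(now)
    return True

# A flood of 105 rapid requests from one address: only the first 100 get through.
results = [allow_request("10.0.0.7", now=i * 0.01) for i in range(105)]
print(results.count(True))  # -> 100
```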
There are so many more components to and forms of DDoS attacks -- if you're interested, check out these sources!
With rising numbers of self-driving cars in production, reports of their failures appear frequently and are met with concerns about the safety of these vehicles. Tesla boasts about the Autopilot capability of some of its models, and while the general public may take "Autopilot" to mean a completely hands-off experience, Tesla insists that the driver never remove their hands from the steering wheel. Just as airline pilots are still required to man planes equipped with autopilot, Tesla acknowledges that the technology it has implemented cannot operate entirely without human intervention without risking the safety of everyone on the road. A few particular incidents, although largely isolated in comparison to accidents involving regular, manually driven cars, highlight the bugs that self-driving cars face today.
The most talked-about crash in recent news involved the first death in a partially self-driven car, the Tesla Model S. The following illustration explains the basic circumstances of the crash:
The cause of the crash, which occurred while the car was in Autopilot mode, is believed to be a failure of the obstacle-detection system. A camera and a radar system work together to detect obstacles around the car. The camera may have missed the truck because its bright white exterior was too similar to the brightness of the sky that day to distinguish the two. As the CEO of Tesla explained shortly after, the radar system may have missed the obstacle due to the high ride height of the trailer, confusing it with an overhead road sign that the system is trained to ignore to prevent false braking events. This fatal combination of bugs, along with the theory that the driver was not watching the road as the car instructs drivers to do at all times regardless of mode, led to the first death involving a self-driven car.
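To illustrate how two sensors can fail together, here is a purely hypothetical Python sketch (not Tesla's actual logic): if braking requires both the camera and the radar to agree that something is in the path, then one sensor washing the trailer out against a bright sky and the other dismissing it as an overhead sign means the car never brakes.

```python
# A purely hypothetical sketch of the failure mode described above.
# The sensor models and thresholds are invented assumptions, not Tesla's code.

def camera_sees_obstacle(contrast_with_sky):
    # Assumption: a white trailer against a bright sky has very low contrast.
    return contrast_with_sky > 0.2

def radar_sees_obstacle(object_height_m, ride_height_m):
    # Assumption: if the object rides high enough off the road, treat it as an
    # overhead sign and ignore it, to avoid false braking events.
    return ride_height_m < 1.0 and object_height_m > 0.5

def should_brake(contrast_with_sky, object_height_m, ride_height_m):
    # Requiring agreement cuts false positives but can miss a real obstacle.
    return (camera_sees_obstacle(contrast_with_sky)
            and radar_sees_obstacle(object_height_m, ride_height_m))

# White trailer, bright sky, high ride height: neither check fires, so no braking.
print(should_brake(contrast_with_sky=0.05, object_height_m=2.5, ride_height_m=1.3))
```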
A second popularly reported crash occurred with a Google self-driven car, and thankfully did not result in injury. The following video shows footage of the crash:
Google explains in its statement that the car detected sandbags near the right side of the road while planning to turn right, waited for cars to pass, saw the bus, assumed the bus would yield to let it maneuver around the sandbags, and then moved directly into the side of the bus. Google also explains that these assumptions happen all the time in ordinary human driving, which brings us to important questions for autonomous vehicle production going forward: do we want these self-driven cars to act like human drivers, or do we want them to exhibit perfection? Should the vehicle make these assumptions, or should this aspect of driving be set aside in favor of absolute safety?
Pertinence to Computer Science
The routines performed by these self-driving cars are laid out purely in computer code. Millions upon millions of lines of code enable the car to guide the driver to their chosen destination, ideally with little to no intervention on their part. With programming comes bugs, as we've seen through the very simple projects that we've done. However, when these programs hold human lives in the balance, there is no room for error. Before fully self-driven cars can be deployed to the public, extensive bug testing will have to occur and many new regulations will have to be met. Any mistake can prove, quite literally, to be fatal.
In computer science courses, instructors stress that all conditions be considered and thorough bug testing be done. The incidents explained above show why thoroughness is such an important concept in this field, and how no matter how tedious it can be to consider every possible input/outcome, it will certainly pay off when you have a flawless finished product (that doesn't kill anyone).