Wednesday, November 23, 2016

Margaret Hamilton

     This Tuesday, President Obama awarded Margaret Hamilton the Presidential Medal of Freedom, the highest civilian honor in the US. Hamilton, now 80 years old, was recognized for her incredible work on the Apollo 11 space mission that landed the first man on the moon.


     Hamilton began studying math at Earlham College and subsequently took a programming job at MIT, where she found her passion for Computer Science. She initially assisted with a missile defense system until MIT received the request to begin work on software for the Apollo 11 mission. She then focused her team on creating programs that would alert the astronauts if the computer's processors became overloaded and respond by prioritizing tasks, in an attempt to keep the spacecraft functioning in a crisis and, ideally, save those on board. This software came in quite handy when an alarm sounded minutes before landing. Instead of the mission being aborted completely at the sound of the alarm, the software prioritized the most critical tasks and the astronauts were able to land the vessel safely. As the President eloquently stated when bestowing the award upon her, "Our astronauts didn't have much time, but thankfully they had Margaret Hamilton."


     While Hamilton is best known for making this space mission possible, she went on to create other revolutionary software that enabled the use of technology as we know it. For example, she and her team at MIT wrote code that created the framework for the first portable computer. Hamilton is one of many women of her era whose contributions to Computer Science have often been forgotten, as they were regularly labelled "number crunchers" rather than software engineers.


Sources:
- https://www.bostonglobe.com/metro/2016/11/22/her-mission-was-space-and-software-and-now-she-has-presidential-medal-freedom/fr6Lzx4DPjY4HvKF0ufvhN/story.html
- http://www.bbc.com/news/world-us-canada-38076123

Tuesday, November 15, 2016

New Language for Programming Efficient Simulations

    In a collaboration between MIT, Adobe, the University of California at Berkeley, the University of Toronto, Texas A&M, and the University of Texas, researchers have developed a new programming language that improves the way simulation programs are designed. The new language, named Simit, is designed to cut the length of code required to produce a simulation while preserving the desired level of complexity. A persistent problem in the graphics community is finding a balance between complexity and efficiency. For instance, in a program that models a ball bouncing against obstacles, this trade-off might mean implementing a simple reciprocal-velocity model instead of an advanced collision-response algorithm. The collision-response algorithm would make the simulation more complex and realistic, but simple reciprocal velocity lets the simulation run faster.
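To make that trade-off concrete, here is a minimal Java sketch of the simple reciprocal-velocity model (my own illustration, not Simit code or the researchers' implementation). The entire "collision response" is a damped sign flip on the velocity, which is why it is so cheap to run:

public class BouncingBall {
    public static void main(String[] args) {
        double y = 10.0;            // height (m)
        double vy = 0.0;            // vertical velocity (m/s)
        final double G = -9.8;      // gravity (m/s^2)
        final double DT = 0.01;     // time step (s)
        final double DAMPING = 0.8; // energy kept per bounce

        for (int step = 0; step < 1000; step++) {
            vy += G * DT;           // integrate velocity
            y += vy * DT;           // integrate position
            if (y < 0) {            // hit the floor
                y = 0;
                vy = -vy * DAMPING; // reciprocal velocity: just flip the sign
            }
        }
        System.out.printf("height after 10 s: %.3f m%n", y);
    }
}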

[Image: Simit programming language simulation]


     The researchers are attempting to eliminate this compromise by requiring the programmer to lay out the "translation between the graphical description of the system and the matrix description" only once, and then allowing them to use the "language of linear algebra" to program the simulation further. In this way, Simit only translates instructions from linear algebra to the language of graphs, instead of forcing the programmer to map complex graphs to matrices and vice versa throughout the program. (1)

     Another major challenge in graphics programming is that each computer handles it differently. One machine may run a simulation at unacceptably slow speeds, while another runs it at a perfect 60fps. Additionally, the simulation may have to be written in different languages to accommodate different systems. By creating a language specifically meant for graphics, Simit reduces this extra work when moving between systems. When Simit-run simulations were tested against the same simulations written in other languages, Simit consistently performed better, running between 4 and 20 times as fast.

Sources:
1. https://www.eecs.mit.edu/news-events/media/user-friendly-language-programming-efficient-simulations
Images:
1. https://fossbytes.com/simit-new-programming-language-fast-computer-simulations/

Wednesday, November 9, 2016

Monitoring the Progression of Parkinson's Disease with Computer Software

     Parkinson's disease is a very serious, incurable medical condition that affects a person's motor skills. It typically begins with tremor, stiffness, and slow movement of the hands, and may progress to severe disability and dementia. Though there is currently no cure for Parkinson's, early detection gives a better chance of stalling the symptoms with early drug and therapy treatment.


     Researchers at MIT have designed software that can help detect these symptoms early on, and can even monitor the progression of the disease as the person uses their device from day to day. The symptoms have been difficult to measure quantitatively, which makes it harder for doctors to treat their patients accurately without close, lengthy observation. The software works by measuring the time taken to press and release a key (for an average healthy individual, consistently around 100 milliseconds) and analyzing that data to decide whether the user is taking an abnormal amount of time pressing the keys or showing a large fluctuation in times. This, in theory, may indicate stiffness or slow movement of the hands, which are key signs of Parkinson's. More often than not, this software would be used on already-diagnosed individuals to monitor the progression of their disease. If the data seems to indicate a fluctuation of symptoms, doctors may be able to use this information to adjust treatment plans. The software was tested with both healthy individuals (the control group) and those with an early stage of Parkinson's. The results showed that, as hypothesized, the healthy individuals were consistent in the time taken to press a key, while those with Parkinson's produced varied data.
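As a rough sketch of the kind of analysis involved (my own Java illustration, not the MIT team's software; the sample data and cutoffs are invented), the idea is to compare the mean and spread of key-hold times against the healthy baseline of roughly 100 milliseconds:

public class KeyHoldAnalyzer {
    public static void main(String[] args) {
        // Hypothetical hold times (ms) for one typing session.
        double[] holdTimesMs = {98, 105, 102, 180, 95, 210, 99, 101};

        double mean = 0;
        for (double t : holdTimesMs) mean += t;
        mean /= holdTimesMs.length;

        double variance = 0;
        for (double t : holdTimesMs) variance += (t - mean) * (t - mean);
        double stdDev = Math.sqrt(variance / holdTimesMs.length);

        // Invented cutoffs: slow presses or large fluctuation get flagged.
        boolean flagged = mean > 130 || stdDev > 30;
        System.out.printf("mean=%.1f ms, stdDev=%.1f ms, flagged=%b%n",
                mean, stdDev, flagged);
    }
}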


     Some 4 million people worldwide suffer from this disease, which means that this new method of early detection is incredibly sought after and vital to improving the quality of life of so many. With quantitative data available to the neurologists treating Parkinson's patients, they may be able to treat symptoms more accurately and collect research that could lead to improved medication or even a cure.


Sources:
- https://www.eecs.mit.edu/news-events/media/monitoring-parkinsons-symptoms-home
- http://www.orionpharma.co.uk/Products-and-Services-Orion/Parkinsons-disease/10-facts-about-Parkinsons-disease/

Images:
- https://upload.wikimedia.org/wikipedia/commons/4/4e/Computer_keyboard.png
- https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmCXvIUfDI-2NCAU2SztGKCT3VQhH7-8YeeX5TQG7uyWTf8tdmoGIlvB1qyEGpF_KWWmTWQVIhw2qXl1n5WCjYofCup7Pr8LbbJ2rzKTBUHDJtqJppSQ1CbgAMp1M-l3hg1wqOW-OucLY/s1600/parkinson.png

Wednesday, November 2, 2016

Facial Recognition Technology

     From unlocking your tech devices to automagically accessing your account at an ATM, facial recognition technology is becoming more prevalent by the day. Many tech companies are looking for ways to replace the common written or typed character password with a more intuitive and personal form of identification. I looked at one of these methods in a previous blog post about fingerprint scanning (you can see more about this here).

    Facial recognition as an alternative to character passwords is certainly its most common use in everyday life, but the technology is also used heavily in law enforcement. For instance, video capture may be used to record the faces of those involved in criminal acts. Stills can be taken from the video, then resized and interpreted as a 3D model of the subject's physical features. Computer algorithms determine many different quantitative aspects of the 3D model, such as the distance between a person's eyes, the length of their nose, and the shape of their facial features. The more aspects taken into consideration, the better the accuracy when identifying the person in question. Once these calculations are run, the computer can compare the data with existing points in the system and ultimately decide whether an arbitrary closeness threshold is met.
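A minimal Java sketch of that final comparison step (purely illustrative; the measurements and the closeness threshold are made up) might look like this:

public class FaceMatcher {
    // Euclidean distance between two feature vectors
    // (eye spacing, nose length, jaw width, ... in mm).
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double[] onFile    = {64.1, 52.3, 118.7}; // measurements in the system
        double[] candidate = {63.8, 52.9, 119.2}; // measurements from the still
        final double THRESHOLD = 2.5;             // arbitrary closeness threshold

        System.out.println("match: " + (distance(onFile, candidate) < THRESHOLD));
    }
}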



      This form of technology is not without its drawbacks. Many consider it an invasion of privacy, as the data collected could be used without the consent of the person photographed. The tech news site WeLiveSecurity reports that in a recent study, around 75% of US consumers said they would not visit a store that openly used facial recognition technology for marketing purposes. While facial recognition technology is making great leaps in innovation, the views of the general public may limit its use due to privacy concerns.


Sources:
- http://www.welivesecurity.com/2015/08/24/facial-recognition-technology-work/
- http://electronics.howstuffworks.com/gadgets/high-tech-gadgets/facial-recognition2.htm
- http://tinyurl.com/zgqocz9


Wednesday, October 26, 2016

Hackathons

     Hackathons are put on across the world for people interested in technology to come together and build unique projects in a short period of time. These events are designed to connect people with similar interests through teamwork and competition, and largely deal with Computer Science and coding. Participants brainstorm ideas and spend anywhere from 24 to 48 hours at a common location turning an idea into a presentable product. Host locations can be found in cities across the globe and are independently proposed and then organized through Major League Hacking (MLH). Through grants and MLH organizers, each chosen location is set up with the equipment needed to achieve any project idea, and with all of the necessary amenities for overnight stays.

     MLH aims to give people with an interest in technology a place to come together, produce ideas, and participate in a unique collaborative programming project. With such a small amount of time to reach a final product, many participants skip sleep and work nonstop. This has led to the criticism that these events encourage sloppy, poorly written code and unhealthy sleeping and eating habits. Regardless, some hackathons produce truly innovative things that people use on a daily basis. GroupMe, for example, began as a hackathon project.


     The skills that students learn at these events are invaluable to careers in Computer Science. At many large companies, Google in particular, being able to work collaboratively and express your ideas to a team is vital to the job. Hackathons expose prospective programmers to their future career environment while allowing them to control the entirety of their project.

     If you're looking for a way to get involved, check out the MLH website for a list of hackathons near you! The only requirement is an interest in technology and a willingness to work in a collaborative environment.

Sources:
https://mlh.io/
https://medium.com/hackathons-anonymous/wtf-is-a-hackathon-92668579601#.lu7qzfz4e

Monday, October 17, 2016

DDoS

Introduction   

      Distributed Denial of Service (DDoS) is a common cyber attack used to take a server, website, or device offline. The resources that enable this kind of attack are, unfortunately, widely available, and so it is also widely used. Small-scale attacks may require only very basic knowledge of computers, and botnets can be bought for relatively small amounts of money on online black markets. Although this is illegal, these attacks are extremely difficult to trace, as the traffic is sent through many computers (hence "Distributed") and can be hard to discern from regular network activity.


How Does it Work?

     The attacker begins by building a botnet: a network of computers, not necessarily owned by the attacker, that can be taken over and controlled remotely without the owners' knowledge and made to act as agents of the attack. Once established, the botnet's machines attack the target simultaneously in one of several ways: sending more connection requests than a server can handle, or sending floods of random data to use up the target's bandwidth (how much data can be transferred from point A to point B in a set amount of time). This leaves the target's connection slow or completely cut off for however long the attack is active.


Prevention

    By installing proper antivirus software and/or using a firewall, you can prevent your computer from being used as an agent of an attack. One technique that popular sites and services use to lower the chances of being overwhelmed is bandwidth oversubscription, which makes it more difficult for an attacker to grow their botnet large enough to saturate the target. DDoS mitigation is also common: it monitors the amount of information being sent and received from each source and tunes out the "noise" of the random data sent in some attacks. Overall, you probably won't be a victim of a DDoS attack unless you run a controversial website or piss off the wrong guy in a PvP match.
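As a small illustration of that per-source monitoring idea (my own sketch, not any particular product's code; the cap is arbitrary), a mitigation layer can count requests per source address each second and drop the excess:

import java.util.HashMap;
import java.util.Map;

public class RateLimiter {
    private static final int MAX_PER_SECOND = 100; // arbitrary cap
    private final Map<String, Integer> counts = new HashMap<>();
    private long windowStart = System.currentTimeMillis();

    // Returns false when this source has exceeded its budget
    // for the current one-second window.
    public boolean allow(String sourceAddress) {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 1000) { // start a fresh window
            counts.clear();
            windowStart = now;
        }
        return counts.merge(sourceAddress, 1, Integer::sum) <= MAX_PER_SECOND;
    }

    public static void main(String[] args) {
        RateLimiter limiter = new RateLimiter();
        for (int i = 1; i <= 105; i++) {
            if (!limiter.allow("203.0.113.7")) {
                System.out.println("dropped request " + i);
            }
        }
    }
}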



There are so many more components to and forms of DDoS attacks -- if you're interested, check out these sources!

http://www.digitalattackmap.com/understanding-ddos/
http://www.webopedia.com/TERM/D/DDoS_attack.html
https://www.us-cert.gov/ncas/tips/ST04-015
http://security.stackexchange.com/questions/73369/how-do-major-sites-prevent-ddos


Thursday, October 13, 2016

Why Do Self-Driving Cars Crash?



Overview of Select Incidents    

     With rising numbers of self-driving cars in production, reports of their failures surface frequently and are met with concerns about the safety of these vehicles. Tesla boasts about the Autopilot capability of some of its models, and while the general public may take "auto-pilot" to mean a completely hands-off experience, Tesla insists that the driver never remove their hands from the steering wheel. Just as airline pilots are still required to man planes equipped with autopilot, Tesla acknowledges that the technology it has implemented cannot operate entirely without human intervention without risking the safety of everyone on the road. A few particular incidents, although isolated in comparison to accidents in regular, manually driven cars, outline the bugs that self-driving cars face now.

     The most talked-about crash in recent news was the first death involving a partially self-driven car, the Tesla Model S. The following diagram shows the basic circumstances of the crash:

[Image: diagram of the Tesla and truck accident]

The cause of the crash, which occurred while the car was in Autopilot mode, is assumed to be a failure of the obstacle-detection system. A camera and a radar system interact to determine obstacles around the car. The camera may have missed the truck because its bright white exterior was too similar to the brightness of the sky that day to differentiate the two. As Tesla's CEO explained shortly after, the radar system may have missed the obstacle because of the trailer's high ride height, confusing it with a road sign of the kind it is trained to ignore to prevent false braking events. This combination of failures, along with the theory that the driver was not watching the road as the car instructs you to do at all times regardless of mode, led to the first death involving a self-driven car.
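A drastically simplified Java sketch (my own illustration, nothing like Tesla's actual code) shows how two independent sensor misses add up to no reaction at all:

public class ObstacleCheck {
    public static void main(String[] args) {
        boolean cameraSees = false; // white trailer washed out against a bright sky
        boolean radarSees  = false; // high trailer dismissed as an overhead sign

        if (cameraSees || radarSees) {
            System.out.println("obstacle detected: brake");
        } else {
            System.out.println("no obstacle: maintain speed"); // the fatal outcome
        }
    }
}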

     A second widely reported crash occurred with a Google self-driving car, and thankfully did not result in injury. Footage of the crash shows the following sequence of events:



Google explains in its statement that the car detected sandbags near the right side of the road while planning a right turn, waited for traffic to pass, saw the bus, and assumed the bus would yield to let the car maneuver around the sandbags; it then moved directly into the side of the bus. Google also notes that these assumptions happen all the time in regular human driving, which raises important questions for autonomous vehicle production going forward: do we want self-driven cars to act like human drivers, or to exhibit perfection? Should these assumptions be made by the vehicle, or should this aspect of driving be set aside in favor of absolute safety?

Pertinence to Computer Science

     The routines performed by these self-driving cars are laid out purely in computer programming. Millions upon millions of lines of code enable the car to guide the driver to their chosen destination, ideally with little to no interference on their part. With programming come bugs, as we've seen through the very simple projects we've done. When these programs hold human lives in the balance, however, there is no room for error. Before fully self-driven cars can be deployed to the public, extensive bug testing will have to occur and many new regulations will have to be met. Any mistake can prove, quite literally, to be fatal.

     In computer science courses, instructors stress that all conditions be considered and thorough bug testing be done. The incidents explained above show why thoroughness is such an important concept in this field, and how, no matter how tedious it can be to consider every possible input and outcome, it will certainly pay off when you have a flawless finished product (that doesn't kill anyone).


Sources:
http://www.nytimes.com/interactive/2016/07/01/business/inside-tesla-accident.html?_r=0
https://electrek.co/2016/07/01/understanding-fatal-tesla-accident-autopilot-nhtsa-probe/
https://www.engadget.com/2016/02/29/google-self-driving-car-accident/


Thursday, October 6, 2016

Pseudocode

Introduction

Pseudocode is defined as structured English for describing algorithms. We've used pseudocode to express what we want our code to do without using proper Java syntax, but there are specific rules for writing pseudocode that make it readable for other developers, and we should be familiar with them.


Basics

    One of the first skills we learned in 150 was how to initialize variables to a certain type and value. In pseudocode, we use the phrase "set <variable name> to <value>" to express this. We do not have to tell the reader what type the variable is, since, unlike the computer, we can tell the type from the value alone. We can also represent calls to other functions by using the phrase "call <function> with/returning <variable>/<what it returns>".
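For example (my own illustration), those phrases map onto Java like this, with the pseudocode shown as comments above the equivalent statements:

public class PseudocodeDemo {
    static double average(int[] scores) {
        double sum = 0;
        for (int s : scores) sum += s;
        return sum / scores.length;
    }

    public static void main(String[] args) {
        // Pseudocode: set scores to 80, 90, 100
        int[] scores = {80, 90, 100};
        // Pseudocode: call average with scores returning result
        double result = average(scores);
        System.out.println(result); // prints 90.0
    }
}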

Representing Loops

     These "constructs" are keywords used to express how we want the program to flow through statements of code. A sequence is used as a representation of linear progression through a set of tasks. This can be used inside of a loop or on its own. To represent loops we use roughly the same jargon as it appears in java with an added "end" statement, representing when body inside of the loop ends. For a while loop we'd use the keywords while and endwhile, for if statements if-then-else and endif, for for loops for and endfor, for switch statements case and endcase, and for do-while repeat-until. These replacements translate java to a more english-based representation, making it easier for people to follow your program and see the beginning and end of loops without the use of brackets.

Uses

     Pseudocode is a way for you to express what you want your program to do without worrying about the rules of a specific programming language. When planning a program, it may be useful to first outline a plan for implementation, including what you want each function or piece of code to do, and then begin implementing in the language of your choice. Pseudocode can also help you comment your finished product so that a reader can more clearly see your train of thought.


Sources
- http://users.csc.calpoly.edu/~jdalbey/SWE/pdl_std.html
- http://guyhaas.com/bfoit/itp/Pseudocode.html

Wednesday, September 28, 2016

Snap Spectacles

Introduction

     Recently, Snapchat has rebranded itself as Snap Inc. and released plans for its new Snap Spectacles: a pair of glasses with wide-angle, circular video-capturing capabilities. This allows for a fuller view than a regular smartphone camera, and a way to "see a memory in the way you experienced it", as the company said in its blog post.


     Comparisons were immediately made to a previous wearable, Google Glass (you can see my blog post about that here). While Glass was modeled as an extension of nearly every feature of a regular smartphone, Snap Spectacles are meant to work only with the Snapchat app. They are simple and designed to be fashionable, aspects that Google Glass lacked, which likely contributed to its failure. While Glass sold for upwards of $1,500, Spectacles are planned to launch with an initial price tag of $130. Each pair will come with a charging case, and a full charge is said to last approximately a day.

How it works

     With very little information available on the specifics as of now, I can only comment on the basic capabilities. The device will have a button or touchpad on the left side to begin recording video. Pressing it triggers a circular series of lights around the camera lens to indicate that the device is recording. Recording lasts 10 seconds, and the captured video is then sent to the device paired with Spectacles via a Bluetooth or WiFi connection. In pseudocode, this might look something like:

while the device is on/has charge:
    if the button is pushed or touch is registered:
        if a connection is made to the external device:
            turn lights on
            begin recording sequence
            send information to paired device
        else:
            prompt user to pair a device with an error light

       The information sent to the device will be stored in the "Memories" section of the Snapchat app, which the user can then post to their story or save to their phone's camera roll.


Sources
Images/Videos:
https://www.youtube.com/watch?v=XqkOFLBSJR8

Content:
http://www.latimes.com/business/technology/la-fi-tn-snapchat-spectacles-20160926-snap-story.html
https://www.snap.com/news
http://www.businessinsider.com/snapchat-spectacles-glasses-how-they-work-2016-9?op=0#





Wednesday, September 21, 2016

Basics of the Turing Test



Background


       Alan Turing, a mathematician, codebreaker, and philosopher considered by some to be the founder of theoretical computer science and artificial intelligence (AI), laid the foundations of virtually everything having to do with research in AI. Most notably, he created the Turing Test, which measures a machine's ability to exhibit or mimic human conversation and complex behavior. The test is still used to this day to gauge how advanced an AI creation is.



How does it work?

      
      The Turing Test is performed by recruiting three parties: the machine, the control human, and the interrogator. We can consider these A, B, and C respectively. C knows that one of A and B is a machine, while the other is not. It is C's job to ask A and B questions or engage them in conversation to determine which is the human (B). A attempts to mimic what a human might respond with, trying to trick C into believing that A is the human and B is the machine. The test has no set questions or answers; it is simply a test of whether a human interrogator can determine which of two sources is the machine. The goal of the machine is not to know the answer to every question the interrogator asks, but to respond the way a human might be expected to respond. This makes sense, since you wouldn't expect a human to be all-knowing. In Turing's original formulation, the machine does well if it can trick the interrogator at least 30% of the time after five minutes of questioning.



How this relates to Computer Science

        The machine used in this test is essentially a program, or collection of programs, that builds responses based on key words the interrogator uses in their questions. Basic AI is possible through user input and string analysis. For instance, a program could check whether the user's input contains certain words like "rain", "cloudy", "sunny", "hot", or "cold", and respond in a way that integrates weather, to show that it understands the meaning of those words. Using a random number generator, it could pick from a list of responses to show that it can express understanding in more than one way; for example, it could choose among weather statements like "Tell me more about the weather." or "I hope it clears up tomorrow." This is a very rudimentary example, and would definitely not pass the Turing Test.
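A runnable Java version of that rudimentary responder (my own sketch, using the weather statements above) could look like this:

import java.util.Random;
import java.util.Scanner;

public class WeatherBot {
    public static void main(String[] args) {
        String[] weatherReplies = {
            "Tell me more about the weather.",
            "I hope it clears up tomorrow.",
            "I never know whether to bring an umbrella."
        };
        Random random = new Random();
        Scanner in = new Scanner(System.in);

        System.out.println("Say something:");
        String input = in.nextLine().toLowerCase();

        // Keyword scan: give a weather-flavored reply if any weather word appears.
        if (input.contains("rain") || input.contains("cloudy")
                || input.contains("sunny") || input.contains("hot")
                || input.contains("cold")) {
            System.out.println(weatherReplies[random.nextInt(weatherReplies.length)]);
        } else {
            System.out.println("Interesting. Go on.");
        }
    }
}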


Sources:
https://en.wikipedia.org/wiki/Turing_test
http://www.biography.com/people/alan-turing-9512017
http://tinyurl.com/h4s67ev

Wednesday, September 14, 2016

How Fingerprint Scanning Works

Fingerprint scanning is the new norm for unlocking your devices, but have you ever wondered how these scanners work and manage to be so accurate?

Two Common Approaches

One technique for fingerprint scanning is to capture an image of the print under bright light. Also called electro-optical scanning, this method records the ridges and valleys of a fingerprint (as white and black spaces, respectively) in an image that can be stored and compared to future user input. An algorithm compares the black and white spaces of the two images, and if they are nearly identical, access is granted.

   Another technique, capacitance scanning, is commonly used by the iPhone 5s and later. It uses tiny capacitive cells smaller than the ridge of a fingerprint; each cell's circuit closes or remains open depending on whether it sits under a ridge or a valley of the print, either letting current pass or blocking it. This information is stored, as in the previous model, and used for comparison with future input. This method is also harder to fool, since it requires the shape of the print rather than just an image of it.

Logic and How This Relates to Computer Science

   The logic behind comparing the two inputs can be thought of in very simple computer science terms. Just as we compare the equality of two numbers with == and two strings with .equals(), the two images are checked for similarity in roughly the same way. With electro-optical scanning, the image can be thought of as binary, black and white corresponding to 1s and 0s. Logically, we can say: if a certain proportion of those match the original image, grant access; else, try again, lock the phone until a valid passcode is entered, or even delete all data on the device, depending on your settings.
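In Java, that proportion check might look like the following sketch (my own; the tiny 8-cell prints and the 95% threshold are invented for illustration):

public class PrintMatcher {
    public static void main(String[] args) {
        // 1 = ridge (black), 0 = valley (white)
        int[] stored = {1, 0, 1, 1, 0, 0, 1, 0};
        int[] scan   = {1, 0, 1, 0, 0, 0, 1, 0};

        int matches = 0;
        for (int i = 0; i < stored.length; i++) {
            if (stored[i] == scan[i]) matches++;
        }
        double similarity = (double) matches / stored.length;

        final double THRESHOLD = 0.95;
        if (similarity >= THRESHOLD) {
            System.out.println("Access granted");
        } else {
            System.out.println("Try again"); // 7/8 = 0.875 here, so access is denied
        }
    }
}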

Conclusion

     Fingerprint scanning is just one part of an expanding realm of biometrics. Some devices have moved to facial recognition, such as Windows Hello in Windows 10, while others implement iris scanning similar to what you might see in science-fiction movies. With passwords and personal information constantly being stolen online, I look forward to the day when I no longer have to memorize and enter dozens of passwords, and can log in to my accounts simply by being myself.



Sources:
Graphics: http://tinyurl.com/gofbzwn, http://tinyurl.com/h22r2eb
Info: http://gizmodo.com/how-the-iphone-5ss-fingerprint-scanner-works-and-what-1265703794, http://www.windowscentral.com/how-set-windows-hello-facial-recognition-windows-10

Tuesday, September 6, 2016

Google Glass as a Trailblazer for Modern Wearable Technology

 Background 

  In 2012, Google announced a new wearable piece of technology named Google Glass. Though the project is largely regarded as a failure, since it has been taken off the market and its development halted, the computer science behind it is fascinating and could very well have influenced other (more successful) wearable technologies such as the Apple Watch.


   The basic abilities of Glass included reminding the user of calendar events, giving directions, displaying any alert activity on the wearer's phone, displaying weather and traffic updates, taking photos, performing Google searches, and allowing video chats via Google Plus. To bolster the argument that this technology paved the way for other wearable successes: 5 of the 7 basic capabilities listed are implemented on the Apple Watch.

How does it work?

   One of the main control surfaces of Google Glass was the capacitive touch pad. Located on the right side of the wearable, the touch pad is essentially a weak electrostatic field across a surface. When something makes contact with this field (in this case, a finger), the controller chip detects the change and registers it as a touch. The chip recognizes several different movements or swipes and interprets them as instructions for the system. For example, a horizontal swipe tells it to display one of the various menus available on the device, and a downward swipe either backs out of one of those menus or puts the device into sleep mode. This can be thought of as a series of "if" statements, with the resulting action as the statements in the braces (e.g., if a downward swipe is detected, trigger sleep mode).
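As a sketch of that idea (my own illustration; Glass's real firmware is certainly structured differently), the gesture handling reduces to exactly such a chain of "if" statements:

public class TouchpadHandler {
    enum Gesture { SWIPE_FORWARD, SWIPE_BACK, SWIPE_DOWN }

    static void handle(Gesture g) {
        if (g == Gesture.SWIPE_FORWARD || g == Gesture.SWIPE_BACK) {
            System.out.println("scroll through the menus");
        } else if (g == Gesture.SWIPE_DOWN) {
            System.out.println("back out of the menu, or sleep");
        }
    }

    public static void main(String[] args) {
        handle(Gesture.SWIPE_DOWN); // prints "back out of the menu, or sleep"
    }
}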

Other sensors on the Glass include a bone-conduction speaker, which sends vibrations through your skull into your inner ear and eliminates the need for earpieces; a proximity sensor and ambient light sensor, which let the device detect whether it is being worn as well as certain eye movements that can act as commands; and an inertial sensor, which detects motions such as leaning your head in certain ways, which can also command the device to "wake up".

Google Glass Today

     While the devices are no longer sold by Google, many are being sold second-hand on sites such as Amazon and eBay. Most people have abandoned the expensive accessory, but Google still offers guides for developers looking to create their own Glass content (https://developers.google.com/glass/). On the official Glass site, Google mentions that a future version will be available "when it's ready". While this isn't by any means a formal announcement, it's exciting to look forward to a far more advanced wearable from this massive company.


Innovation requires failure, and in this case, Google Glass needed to be the failure that led to greater innovation, design, and implementation.



Sources: 
http://electronics.howstuffworks.com/gadgets/other-gadgets/project-glass2.htm
http://tinyurl.com/hs6f55v




Friday, September 2, 2016

Apple vs. FBI and the San Bernardino iPhone Conflict

Background and Conflict

   In the aftermath of the San Bernardino attack, the FBI had in its possession an iPhone belonging to one of the perpetrators. Investigators were initially unable to unlock the phone and access the data stored on it, as it was protected by encryption and a passcode, with an automatic data wipe if the code is entered incorrectly a certain number of times. The FBI approached Apple, asking that a backdoor be created to breach the iPhone's security and allow access to data they believed could fill key gaps in their timeline of events during the attack. Apple declined. In its official statement, the company said it had cooperated with authorities and valid search warrants, given the FBI data in its possession for the investigation, and lent Apple engineers to help the bureau work out its options with the technology. Apple noted that although it had complied up to that point, the FBI was now requesting a version of iOS that would breach the security of the iPhone, to be installed on the device in question, something the company was unwilling to create.


   While this may seem a reasonable request on the FBI's part, Apple points out that the implications of creating this sort of backdoor are potentially dire. In the official statement, CEO Tim Cook argues that if this version of iOS were placed in the wrong hands, it could fundamentally compromise the security of all iPhones. Law enforcement insisted that it was a one-off deal, and that this version of iOS would be used solely for this case. It would be nice to believe it could be kept secure in the government's hands, but many have doubts about the intentions of our system, especially after the Snowden revelations. By creating this version, Apple could give the government, and possibly black-hat hackers, the opportunity to compromise our already dwindling privacy and security.


Outcomes

 Though this story is relatively old news now, it raises key questions about cybersecurity and the grey area between privacy and defense. The situation made me question the value I put on my privacy and how much of it I would be willing to give up for the safety of my country. Regardless, it doesn't seem that I have much of a say: access was eventually gained to the data on the phone without Apple's assistance. This leads us to ask whether the FBI's own agents gained access, or whether an outside "gray-hat" hacker or hacking group was hired to break in. Either way, the fact that access was gained suggests an existing loophole in iOS, one that could compromise the privacy of any iPhone user.



Sources:
Content:
http://www.apple.com/customer-letter/
http://www.cnbc.com/2016/03/29/apple-vs-fbi-all-you-need-to-know.html
http://www.theverge.com/2016/3/28/11317396/apple-fbi-encryption-vacate-iphone-order-san-bernardino

Graphics: http://tinyurl.com/jrqxhqo

Wednesday, August 31, 2016

Teaching Children to Code with Minecraft

   As a kid, I enjoyed playing the early versions of what is now one of the most successful video games (in terms of copies sold) in the world: Minecraft. Though most popular among younger age groups, Minecraft benefited from the presence of a much older demographic, namely 20-to-35-year-old men, who spent time creating modifications and plugins that revolutionized multiplayer Minecraft servers and turned them into gold mines. Initially, these modifications (and servers even more so) were difficult to create, install, and run for those without prior in-depth knowledge of computers. As many of the children who played Minecraft discovered how these things were accomplished, they unintentionally taught themselves basic computer organization. It was not long before it was proposed that this could be an excellent way to introduce children to Computer Science.


   To teach children the basics of Computer Science, instructors have found that instead of introducing them to lines of text that appear random and confusing to beginners (like the typical "Hello World" starter program, shown below), it is more practical to begin with something they are already acquainted with. Minecraft, being as popular as it is, gives children a foundation of knowledge of the existing game mechanics, and likely even partial knowledge of the means by which coders modify those mechanics. It is convenient that Minecraft itself is fairly easy to modify, with software that teaches anything from modifying the game's basic mechanics to changing the entire appearance of the world from a more artistic perspective (LearnToMod), and even a dedicated scripting language (Skript) to work with. By connecting Computer Science to something exciting and heavily visual, students are much more likely to learn concepts, since they can actively experience the results of their work, thoroughly enjoy it, and want to learn more.
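For contrast, here is that canonical Java starter program; to a child with no context, every one of these lines is noise:

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}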


   While the majority of instructors use various block-based, high-level languages (similar to Scratch, which, if you are not familiar, is also commonly used to teach Computer Science to young children), it is also possible to begin with common languages such as Python or Java. This head start in a field with such high job demand in our technological era will likely prove invaluable as a boost into a job market heavily dependent on workers with these skills.


Sources:

Content:
http://www.learntomod.com
https://dev.bukkit.org/bukkit-plugins/skript/
https://www.engadget.com/2015/11/17/minecraft-hour-of-code-tutorial/
http://fortune.com/2015/11/16/minecraft-microsoft-code/
http://www.youthdigital.com/mod-design-1.html

Graphics:
http://tinyurl.com/hr5gvbw
http://tinyurl.com/z2f3ww5
http://tinyurl.com/h47587r