
MU Is Google Making Us Stupid Essay


Description

There are two articles that need to be read, and your essay should respond to the question “Is Google making us stupid?”

A persuasive essay attempts to convince the audience to accept or at least respect your position on a controversial topic: this type of essay appeals to the audience through sound, logical reasoning (argumentation), and emotional appeals (persuasion).

You will start this assignment by choosing one of the selected videos from this section. You will then develop an argument of at least 900 words that either supports or refutes a related issue of technology raised in the chosen video. For example, you might argue against Sherry Turkle’s assertion that social media has “blurred the definition” of intimacy for young people in America, or you could argue that, counter to Nicholas Carr’s warnings, having one’s brain connected directly to the internet is simply the next stage in human evolution. Regardless, the essay must pick a side of the issue and develop an argument that is supported with sound logical reasoning and evidence.

This is the first article:

All research must be documented according to the MLA format. Those sources are listed in the folders below. There is also an MLA Style Guide. NOTE: Review the instructions at http://support.ebsco.com/help/?int=ehost&lang=&fea… and make any necessary corrections before using. Pay special attention to personal names, capitalization, and dates. Always consult your library resources for the exact formatting and punctuation guidelines.

Works Cited
Liebelson, Dana. “Do Androids Dream of Electric Lolcats?” Mother Jones 39.5 (2014): 5. Points of View Reference Center. Web. 10 Nov. 2015.

Persistent link to this record (Permalink): http://search.ebscohost.com/login.aspx?direct=true&db=pwh&AN=97499417&site=pov-live

Section:
OUT FRONT BRAIN TRUST

DO ANDROIDS DREAM OF ELECTRIC LOLCATS?
Cool: Computers that think like human brains. Creepy: Early adopters include Facebook and the NSA
In June 2012, a Google supercomputer made an artificial-intelligence breakthrough: It learned that the internet loves cats. But here’s the remarkable part: It had never been told what a cat looks like. Researchers working on the Google Brain project in the company’s X lab fed 10 million random, unlabeled images from YouTube into their massive network and instructed it to recognize the basic elements of a picture and how they fit together. Left to their own devices, the Brain’s 16,000 central processing units noticed that many of the images shared similar characteristics, which the network eventually recognized as a “cat.” While the Brain’s self-taught knack for kitty spotting was nowhere near as good as a human’s, it was nonetheless a major advance in the exploding field of deep learning.
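To make the idea of self-taught feature learning concrete, here is a minimal sketch of an autoencoder: a network trained only to reconstruct its unlabeled inputs, so any features it finds are discovered on its own. This is an illustration under assumptions, not the Google Brain system itself; it uses PyTorch, random stand-in data, and a tiny network, whereas the real experiment ran a vastly larger model over millions of YouTube frames.

```python
# Toy sketch of unsupervised feature learning, loosely in the spirit of the
# Google Brain cat experiment: the network sees only unlabeled "image patches"
# and is never told what anything is. (Random data here, purely for illustration.)
import torch
import torch.nn as nn

patches = torch.rand(1024, 64)          # 1024 unlabeled 8x8 patches, flattened

model = nn.Sequential(
    nn.Linear(64, 16), nn.ReLU(),       # encoder: compress each patch to 16 features
    nn.Linear(16, 64), nn.Sigmoid(),    # decoder: reconstruct the original patch
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    recon = model(patches)
    loss = loss_fn(recon, patches)       # no labels anywhere: the only training
    optimizer.zero_grad()                # signal is how well the input is rebuilt
    loss.backward()
    optimizer.step()

features = model[:2](patches)            # encoder output: 16 learned features per patch
print(features.shape)                    # torch.Size([1024, 16])
```

On real image patches, the learned hidden units tend to respond to recurring visual structure such as edges and textures, which is the small-scale analogue of the Google network’s emergent “cat” detector.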

The dream of a machine that can think and learn like a person has long been the holy grail of computer scientists, sci-fi fans, and futurists alike. Deep learning -- algorithms inspired by the human brain and its ability to soak up massive amounts of information and make complex predictions -- might be the closest thing yet. Right now, the technology is in its infancy: Much like a baby, the Google Brain taught itself how to recognize cats, but it’s got a long way to go before it can figure out that you’re sad because your tabby died. But it’s just a matter of time. Its potential to revolutionize everything from social networking to surveillance has sent tech companies and defense and intelligence agencies on a deep-learning spending spree.

What really puts deep learning on the cutting edge of artificial intelligence (AI) is that its algorithms can analyze things like human behavior and then make sophisticated predictions. What if a social-networking site could figure out what you’re wearing from your photos and then suggest a new dress? What if your insurance company could diagnose you as diabetic without consulting your doctor? What if a security camera could tell if the person next to you on the subway is carrying a bomb?

And unlike older data-crunching models, deep learning doesn’t slow down as you cram in more info. Just the opposite -- it gets even smarter. “Deep learning works better and better as you feed it more data,” explains Andrew Ng, who oversaw the cat experiment as the founder of Google’s deep-learning team. (Ng has since joined the Chinese tech giant Baidu as the head of its Silicon Valley AI team.)

And so the race to build a better virtual brain is on. Microsoft plans to challenge the Google Brain with its own system called Adam. Wired reported that Apple is applying deep learning to build a “neural-net-boosted Siri.” Netflix hopes the technology will improve its movie recommendations. Google, Yahoo, and Pinterest have snapped up deep-learning companies; Google has used the technology to read every house number in France in less than an hour. “There’s a big rush because we think there’s going to be a bit of a quantum leap,” says Yann LeCun, a deep-learning pioneer and the head of Facebook’s new AI lab.

Last December, Facebook CEO Mark Zuckerberg appeared, bodyguards in tow, at the Neural Information Processing Systems conference in Lake Tahoe, where insiders discussed how to make computers learn like humans. He has said that his company seeks to “use new approaches in AI to help make sense of all the content that people share.” Facebook researchers have used deep learning to identify individual faces from a giant database called “Labeled Faces in the Wild” with more than 97 percent accuracy. Another project, dubbed PANDA (Pose Aligned Networks for Deep Attribute Modeling), can accurately discern gender, hairstyles, clothing styles, and facial expressions from photos. LeCun says that these types of tools could improve the site’s ability to tag photos, target ads, and determine how people will react to content.
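The article does not describe the internals of Facebook’s system, but face verification with deep learning is commonly framed as an embedding comparison: a trained network maps each photo to a vector, and two photos are judged to show the same person when the vectors are close. The sketch below is hypothetical; embed() is a stand-in for a trained network, and the 128-dimensional descriptor and 0.6 threshold are invented for illustration.

```python
# Sketch of face verification as embedding comparison. The embed() stub is a
# hypothetical stand-in for a trained deep network's embedding layer; here it
# just returns random vectors so the script runs end to end.
import numpy as np

rng = np.random.default_rng(0)

def embed(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained network mapping a face to a descriptor."""
    return rng.normal(size=128)          # 128-dimensional face descriptor

def same_person(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.6) -> bool:
    a, b = embed(img_a), embed(img_b)
    cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return cosine > threshold            # close in embedding space => same identity

photo1 = rng.random((96, 96))            # placeholder "photos"
photo2 = rng.random((96, 96))
print(same_person(photo1, photo2))
```

Attribute models like the PANDA system described above work in a related way: rather than comparing embeddings, classifiers are trained on learned features to predict labels such as gender, hairstyle, or clothing style.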

Yet considering recent news that Facebook secretly studied 700,000 users’ emotions by tweaking their feeds or that the National Security Agency harvests 55,000 facial images a day, it’s not hard to imagine how these attempts to better “know” you might veer into creepier territory.

Not surprisingly, deep learning’s potential for analyzing human faces, emotions, and behavior has attracted the attention of national-security types. The Defense Advanced Research Projects Agency has worked with researchers at New York University on a deep-learning program that sought, according to a spokesman, “to distinguish human forms from other objects in battlefield or other military environments.”

Chris Bregler, an NYU computer science professor, is working with the Defense Department to enable surveillance cameras to detect suspicious activity from body language, gestures, and even cultural cues. (Bregler, who grew up near Heidelberg, compares it to his ability to spot German tourists in Manhattan.) His prototype can also determine whether someone is carrying a concealed weapon; in theory, it could analyze a woman’s gait to reveal she is hiding explosives by pretending to be pregnant. He’s also working on an unnamed project funded by “an intelligence agency” -- he’s not permitted to say more than that.

And the NSA is sponsoring deep-learning research on language recognition at Johns Hopkins University. Asked whether the agency seeks to use deep learning to track or identify humans, spokeswoman Vanee’ Vines only says that the agency “has a broad interest in deriving knowledge from data.”

Deep learning also has the potential to revolutionize Big Data-driven industries like banking and insurance. Graham Taylor, an assistant professor at the University of Guelph in Ontario, has applied deep-learning models to look beyond credit scores to determine customers’ future value to companies. He acknowledges that these types of applications could upend the way businesses treat their customers: “What if a restaurant was able to predict the amount of your bill, or the probability of you ever returning? What if that affected your wait time? I think there will be many surprises as predictive models become more pervasive.”
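As a rough illustration of the kind of prediction Taylor describes, the sketch below fits a simple model to made-up behavioral features and estimates the probability that a customer returns. Everything here is an assumption for illustration (the feature names, the synthetic data, and the use of plain logistic regression rather than a deep model); it shows the shape of the approach, not his actual system.

```python
# Minimal sketch of behavioral prediction: a model trained on past customers
# estimates the probability that a new customer returns. Features and data are
# invented; a production system would use far richer inputs and deeper models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical features per customer: [visits last year, average bill, days since last visit]
X = rng.normal(loc=[4.0, 60.0, 90.0], scale=[2.0, 25.0, 60.0], size=(500, 3))
# Synthetic label: customers who visit often and visited recently tend to return
y = (X[:, 0] - X[:, 2] / 60 + rng.normal(scale=1.0, size=500) > 2).astype(int)

model = LogisticRegression().fit(X, y)

new_customer = np.array([[6.0, 45.0, 30.0]])        # 6 visits, $45 average, 30 days ago
p_return = model.predict_proba(new_customer)[0, 1]
print(f"Estimated probability of returning: {p_return:.2f}")
```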

Privacy experts worry that deep learning could also be used in industries like banking and insurance to discriminate or effectively redline consumers for certain behaviors. Sergey Feldman, a consultant and data scientist with the brand personalization company RichRelevance, imagines a “deep-learning nightmare scenario” in which insurance companies buy your personal information from data brokers and then infer with near-total accuracy that, say, you’re an overweight smoker in the early stages of heart disease. Your monthly premium might suddenly double, and you wouldn’t know why. This would be illegal, but, Feldman says, “don’t expect Congress to protect you against all possible data invasions.”

And what if the computer is wrong? If a deep-learning program predicts that you’re a fraud risk and blacklists you, “there’s no way to contest that determination,” says Chris Calabrese, legislative counsel for privacy issues at the American Civil Liberties Union.

Bregler agrees that there might be privacy issues associated with deep learning, but notes that he tries to mitigate those concerns by consulting with a privacy advocate. Google has reportedly established an ethics committee to address AI issues; a spokesman says its deep-learning research is not primarily about analyzing personal or user-specific data -- for now. While LeCun says that Facebook eventually could analyze users’ data to inform targeted advertising, he insists the company won’t share personally identifiable data with advertisers.

“The problem of privacy invasion through computers did not suddenly appear because of AI or deep learning. It’s been around for a long time,” LeCun says. “Deep learning doesn’t change the equation in that sense, it just makes it more immediate.” Big companies like Facebook “thrive on the trust users have in them,” so consumers shouldn’t worry about their personal data being fed into virtual brains. Yet, as he notes, “in the wrong hands, deep learning is just like any new technology.”

Deep learning, which also has been used to model everything from drug side effects to energy demand, could “make our lives much easier,” says Yoshua Bengio, head of the Machine Learning Laboratory at the University of Montreal. For now, it’s still relatively difficult for companies and governments to efficiently sift through all our emails, texts, and photos. But deep learning, he warns, “gives a lot of power to these organizations.”


By Dana Liebelson

Copyright of Mother Jones is the property of Foundation for National Progress and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express written permission. However, users may print, download, or email articles for individual use.

And here is the second article:

EBSCO Publishing Citation Format: MLA (Modern Language Assoc.):


Works Cited
Turney, Drew. “Rebooting Your Memory.” Sun-Herald, The (Sydney) 03 Nov. 2013: 13. Points of View Reference Center. Web. 10 Nov. 2015.

Persistent link to this record (Permalink): http://search.ebscohost.com/login.aspx?direct=true&db=pwh&AN=SYD-6CHFGEMW1FCY0XY7NHB&site=pov-live

Edition: First, Section: TV Liftout, pg. 13

Rebooting your memory DIGITAL LIFE
Drew Turney

PHOTO: Thought patterns: We’ve never had such repositories of information at our disposal before. (Photo: iStock)

After millions of years of remembering what matters, is the way memory works changing? Drew Turney investigates.

As we offload more of the storage of information onto technology, might we be losing the art of remembering for ourselves?

As early as 2002, writer Cory Doctorow, speaking about what would happen if his blog were to disappear, said: “Huge swathes of acquired knowledge would simply vanish … my blog frees me up from having to remember the minutiae of my life.”

We all know the feeling, as we increasingly recruit machinery to stand in for our memories the busier life gets.

But is doing so changing us? When we can reach into the networks and airwaves to pluck out information with impunity, is the technosphere our new collective memory?

We’ve always relied on external sources - including technology - for information storage, whether it’s putting a calendar note in your phone or asking the oldest tribe member where the waterhole is.

But we’re actually better able to remember where to find the information we need than the information itself, according to a recent study about Google’s effects on memory from New York’s Columbia University. Subjects were also less likely to remember something when they knew they could look it up online later.

The tacit conclusion is that if we know we have access to knowledge through what’s called “distributed cognition” or “transactive memory” - the web, other members of the tribe - we don’t bother remembering it.

Georgetown University neurologist and bioethicist James Giordano calls those mental prompts “identicants”. “Rather than relatively complete ideas, we’ll initially recall iconic labels as placeholders to engage technologies to retrieve them,” he says.

Instead of internalising the torrent of information that characterises the modern age, it’s tempting to think we could just clear all those messy little factoids out and have machines remember them for us. Thus our mental capacity will be free for deeper, abstract or creative thought.

But Ian Robertson, psychologist and author of The Winner Effect, warns that even if offloading makes us less stressed, we can’t think of the brain as a computer with finite disk space.

“Your brain doesn’t get full - the permutations of connectivity are almost infinite,” he says. “The more you learn, the more you can learn. More things connect to other aspects of your memory and that makes you more skilled at storing and pulling them out.”

A better way to look at how technology is affecting our memory might be to consider how and why we remember things in the first place. Many memories - even simple ones - are tinged with emotion. Your bank’s phone number is going to mean something very different from the mobile number of a beloved in a new relationship, for example.

As Flinders University psychologist Jason McCarley points out, the Columbia study was conducted with random facts that didn’t necessarily mean anything to the subjects. “It seems less likely we’d offload memory for information that’s meaningful or important,” he says. “So the idea that technology will compromise our general quality of thought or creativity is likely overwrought.”

Macquarie University psychologist Amanda Barnier says we’re not only meaning-making machines, we add the dimension of context, which makes raw information workable.

“If the task of cognition is to make sense of things and make them relevant in everyday life, a computer can’t do that for you.”

We can also choose what information deserves deeper consideration through the simple act of paying closer attention when we know it will do something for us, whereas a computer gives every input equal weight - from a forgettable joke on Facebook to your online banking password. Repeated focus on something files it away beyond the hippocampus - the brain’s memory acquisition apparatus - and it becomes another of the millions of mental units available for instant recall.

The emotion and focus of holding information internally also comes with an appreciation of its potential meaning. In fact, having to go beyond our borders for information might even tax the mental resources we should be putting to better use. After all, our brains have evolved to synthesise facts, not signposts. “Knowing is critical as a foundation for new and creative thinking that extends out from that base,” says psychologist Cliff Abraham of the University of Otago in New Zealand.

“If you’re going to be successful in a profession, you need to collect a lot of information,” says University of NSW professor of neuropsychiatry Perminder Sachdev. “If you don’t have readily accessible information in your head but just try to get it from other sources, it’s going to be difficult for it to lead to creative thought.”

But when we need to augment what we know and remember with the wisdom of the crowd, technology enables it like it never has before. “Is technology affecting our memory and how we learn?” asks computational neuroscientist Paul King. “Certainly. For those with curiosity, learning has become more self-directed and dynamic.”

So the question might not be whether technology is affecting the way we remember things, but how. Sure, we’ve never had such repositories of information at our disposal before. But after millions of years of remembering what matters, the way we remember isn’t going to fundamentally change any time soon. Because so much of Cory Doctorow’s minutiae can be stored off-brain efficiently, we may be facing the best of both worlds.

Copyright 2013 John Fairfax Publications Pty Limited. www.smh.com.au. Not available for re-distribution.

