People are talking a lot about artificial intelligence (AI), viewing it as a force that could reshape how society works. But there is something important missing from this discussion. It isn’t enough to ask how it will change us. We also need to understand how we shape AI and what it can tell us about ourselves.
Every AI model we develop mirrors our rules and expresses our beliefs. A few years ago, while looking for new workers, a famous company abandoned an AI-powered tool after finding it unfavorable to women. The AI was not designed to behave this way. Instead, it was influenced by historical data (数据) favoring men. Similarly, a recent study found that lending algorithms (算法) often offer less favorable terms to people of color, worsening long-standing unfairness in the money-lending business. In both cases, AI isn’t creating new biases (偏见); it is mirroring the ones that are already present.
These reflections (反映) give us an important chance to take a close look at ourselves. By making these problems seen and more pressing, AI challenges us to recognize and address what causes algorithmic bias. As AI continues to develop, we must ask ourselves how we as average people want to shape its role in society. We should not only improve AI models, but also make sure that AI is developed and used responsibly.
A number of companies are already taking action. They are judging the data, rules, and beliefs that shape the behavior of AI models. Still, we cannot expect the companies to do all the work. As long as AI is trained on human data, it will reflect human behavior. That means we have to think carefully about the footprints of ourselves we leave in the world. I may value privacy, but if I give it up in a heartbeat to visit a website, the algorithms may make a very different judgment of what I really want and what is good for me. If I want meaningful human connections yet spend more time on social media and less time in the physical company of my friends, I am indirectly training AI models about the true nature of humanity.
As AI becomes more powerful, we need to take increasing care to read our principles (原则) into the record of our actions rather than allowing the two to diverge. Recognizing this allows us to make better decisions, but only when we are prepared to look closely and take responsibility for what we see.
( ) 1. Why does the writer introduce the two examples in Paragraph 2?
( ) 2. What does the underlined word “diverge” in the last paragraph most probably mean?
( ) 3. According to the passage, what is a good example of shaping AI responsibly?
( ) 4. Which of the following is the best title for this passage?
Answers:
1. D Analysis: An inference question. From Paragraph 2, “In both cases, AI isn’t creating new biases; it is mirroring the ones that are already present.”, we know the writer holds that AI does not create new biases but reflects ones that already exist. The two examples in Paragraph 2, the AI-powered hiring tool that proved unfavorable to women and the lending algorithms that offered less favorable terms to people of color, both support this view.
2. C Analysis: A word-meaning question. From “we need to take increasing care to read our principles into the record of our actions rather than allowing the two to diverge”, we know that as AI develops, we need to write our principles into the record of our actions rather than letting the two move apart. Diverge means “to go in different directions; to move apart”, which is close in meaning to separate.
3. B Analysis: An inference question. From “As long as AI is trained on human data, it will reflect human behavior. That means we have to think carefully about the footprints of ourselves we leave in the world.”, we know that as long as AI is trained on human data, it will reflect human behavior, which means we must think carefully about the footprints we leave in the world. The writer believes humans should be careful about how their own behavior influences AI, that is, “mind what we feed into AI models”.
4. A Analysis: A best-title question. Reading the whole passage, we can see its central point: AI is not the source of the problem but a reflection of the biases already present in human society, so humans must take responsibility for how AI develops. Option A best captures this main idea.
[Analysis of a Long and Difficult Sentence]
I may value privacy, but if I give it up in a heartbeat to visit a website, the algorithms may make a very different judgment of what I really want and what is good for me.
Chinese translation: 我可能重视隐私，但如果我毫不犹豫地为了访问一个网站而放弃它，算法可能会对我真正想要的东西以及什么对我有益做出完全不同的判断。
Sentence structure: This is a compound sentence joined by but. The first clause is a simple sentence. The second clause contains a conditional adverbial clause introduced by if, in which it refers back to privacy, and in a heartbeat and to visit a website serve as adverbials. The main clause of the second half is “the algorithms may make a very different judgment of ...”, with the algorithms as the subject, may make as the predicate, and a very different judgment as the object; of is followed by two coordinate object clauses introduced by what.