
2023 Kaoyan English Source-Periodical Article: Algorithms May Never Truly Figure Out People

To prepare for Kaoyan English (the postgraduate entrance English exam), candidates need to read plenty of articles from foreign periodicals, since the reading-comprehension passages on past exams are mostly drawn from such publications. So if you want to broaden your English knowledge and improve your proficiency, you should make a habit of setting aside a little time each day to read. Below, we offer 2023 Kaoyan candidates a source-periodical article, "Algorithms May Never Truly Figure Out People," for reading practice.

An unlikely scandal engulfed the British government last month. After COVID-19 forced the government to cancel the “A-level” exams that help determine university admission, the British education regulator used an algorithm to predict what score each student would have received on their exam. The algorithm relied in part on how the school’s students had historically fared on the exam. Schools with richer children tended to have better track records, so the algorithm gave affluent students — even those on track for the same grades as poor students — much higher predicted scores. High-achieving, low-income pupils whose schools had not previously performed well were hit particularly hard. After threats of legal action and widespread demonstrations, the government backed down and scrapped the algorithmic grading process entirely. This wasn’t an isolated incident: In the United States, similar issues plagued the International Baccalaureate exam, which used an opaque artificial intelligence system to set students’ scores, prompting protests from thousands of students and parents.

Last month, an unlikely scandal engulfed the British government. After the COVID-19 pandemic forced the government to cancel the "A-level" exams that help determine university admission, the British education regulator used an algorithm to predict each student's exam score. The algorithm depended in part on how a school's students had historically performed on the exam. Schools attended by children from wealthier families tended to have better track records, so the algorithm gave affluent students much higher predicted scores, even those who had been on track for the same grades as poorer students. High-achieving students from low-income families whose schools had not previously performed well were hit especially hard. After threats of legal action and widespread demonstrations, the government backed down and scrapped the algorithmic grading process entirely. This was not an isolated incident: in the United States, similar problems plagued the International Baccalaureate exam, which used an opaque artificial-intelligence system to set students' scores, prompting protests from thousands of students and parents.

These episodes highlight some of the pitfalls of algorithmic decision-making. As technology advances, companies, governments, and other organizations are increasingly relying on algorithms to predict important social outcomes, using them to allocate jobs, forecast crime, and even try to prevent child abuse. These technologies promise to increase efficiency, enable more targeted policy interventions, and eliminate human imperfections from decision-making processes. But critics worry that opaque machine learning systems will in fact reflect and further perpetuate shortcomings in how organizations typically function — including by entrenching the racial, class, and gender biases of the societies that develop these systems.

These episodes highlight some of the pitfalls of algorithmic decision-making. As technology advances, companies, governments, and other organizations increasingly rely on algorithms to predict important social outcomes, using them to allocate jobs, forecast crime, and even try to prevent child abuse. These technologies promise to improve efficiency, make policy interventions more targeted, and remove human imperfections from decision-making. But critics worry that opaque machine-learning systems will in fact reflect and further perpetuate the shortcomings of how organizations typically function, including by entrenching the racial, class, and gender biases of the societies that develop these systems.

But there is an even more basic concern about algorithmic decision-making. Even in the absence of systematic class or racial bias, what if algorithms struggle to make even remotely accurate predictions about the trajectories of individuals’ lives? That concern gains new support in a recent paper published in the Proceedings of the National Academy of Sciences. The paper describes a challenge, organized by a group of sociologists at Princeton University, involving 160 research teams from universities across the country and hundreds of researchers in total. These teams were tasked with analyzing data from the Fragile Families and Child Wellbeing Study, an ongoing study that measures various life outcomes for thousands of families who gave birth to children in large American cities around 2000. It is one of the richest data sets available to researchers: It tracks thousands of families over time, and has been used in more than 750 scientific papers.

But there is an even more basic concern about algorithmic decision-making. Even in the absence of systematic class or racial bias, what if algorithms can hardly make even remotely accurate predictions about the trajectories of people's lives? This concern gains new support from a recent paper published in the Proceedings of the National Academy of Sciences. The paper describes a challenge, organized by a group of sociologists at Princeton University, in which 160 research teams from universities across the country, hundreds of researchers in all, took part. The teams were tasked with analyzing data from the Fragile Families and Child Wellbeing Study, an ongoing study that measures various life outcomes for thousands of families who had children in large American cities around 2000. It is one of the richest data sets available to researchers: it has tracked thousands of families over time and has been used in more than 750 scientific papers.

The results were disappointing. Even the best performing prediction models were only marginally better than random guesses. The models were rarely able to predict a student’s GPA, for example, and they were even worse at predicting whether a family would get evicted, experience unemployment, or face material hardship. And the models gave almost no insight into how resilient a child would become.

The results were disappointing. Even the best-performing prediction models were only marginally better than random guesses. The models were rarely able to predict a student's GPA, for example, and they were even worse at predicting whether a family would be evicted, experience unemployment, or face material hardship. And the models offered almost no insight into how resilient a child would become.

In other words, even having access to incredibly detailed data and modern machine learning methods designed for prediction did not enable the researchers to make accurate forecasts.

In other words, even with access to incredibly detailed data and modern machine-learning methods designed for prediction, the researchers were unable to make accurate forecasts.

Of course, machine learning systems may be much more accurate in other domains; this paper studied the predictability of life outcomes in only one setting. But the failure to make accurate predictions cannot be blamed on the failings of any particular analyst or method. Hundreds of researchers attempted the challenge, using a wide range of statistical techniques, and they all failed.

Of course, machine-learning systems may be much more accurate in other domains; this paper studied the predictability of life outcomes in only one setting. But the failure to make accurate predictions cannot be blamed on the failings of any particular analyst or method. Hundreds of researchers attempted the challenge using a wide range of statistical techniques, and they all failed.

These findings suggest that we should doubt that “big data” can ever perfectly predict human behavior — and that policymakers working in criminal justice policy and child-protective services should be especially cautious. Even with detailed data and sophisticated prediction techniques, there may be fundamental limitations on researchers’ ability to make accurate predictions. Human behavior is inherently unpredictable, social systems are complex, and the actions of individuals often defy expectations.

These findings suggest that we should doubt whether "big data" can ever perfectly predict human behavior, and that policymakers working in criminal-justice policy and child-protective services should be especially cautious. Even with detailed data and sophisticated prediction techniques, there may be fundamental limits on researchers' ability to make accurate predictions. Human behavior is inherently unpredictable, social systems are complex, and the actions of individuals often defy expectations.

And yet disappointing as this may be for technocrats and data scientists, it also suggests something reassuring about human potential. If life outcomes are not firmly pre-determined — if an algorithm, given a set of past data points, cannot predict a person’s trajectory — then the algorithm’s limitations ultimately reflect the richness of humanity’s possibilities.

And yet, disappointing as this may be for technocrats and data scientists, it also suggests something reassuring about human potential. If life outcomes are not firmly predetermined, that is, if an algorithm given a set of past data points cannot predict a person's trajectory, then the algorithm's limitations ultimately reflect the richness of humanity's possibilities.

 

Vocabulary:

1. engulf [ɪnˈɡʌlf] vt. to submerge; to swallow up, to engulf

2. plague [pleɪɡ] n. plague, epidemic; calamity, disaster; nuisance, annoying person v. to torment, to trouble; to pester, to harass

3. episode [ˈepɪsoʊd] n. an experience; an incident; a passage in a story; an episode (of a TV or radio series)

4. pitfall [ˈpɪtfɔːl] n. trap, snare; hidden danger, drawback

5. perpetuate [pərˈpetʃueɪt] v. to cause to continue, to perpetuate (esp. something undesirable)

6. entrench [ɪnˈtrentʃ] vt. to establish firmly, to entrench; to surround with a trench vi. to encroach; to dig in

7. evict [ɪˈvɪkt] vt. to evict; to expel

The above is the 2023 Kaoyan English source-periodical article "Algorithms May Never Truly Figure Out People." We hope it is of some help to those preparing for the 2023 postgraduate entrance exam, and we wish every candidate success in 2023!

