Whenever we talk about AI’s applications, education rarely makes the list. New advancements and research don’t say much about applications in education, and Varun Arora, founding director of Open Curriculum, has some ideas about why that might be.
AI hasn’t been able to solve education’s problems in part because teachers and machine learning developers don’t always speak the same language. Even when they discuss the same issues and want the same outcomes, mismatched vocabulary can be a barrier. For Arora, AI could help solve some of education’s most persistent problems, but first we’ll have to overcome some hurdles.
Reconciliation of Competency Representations
When we train a supervised model, we need a well-defined outcome against which to measure accuracy. In education, however, consensus on what the outcome should be isn’t always apparent. The success marker is hard to pin down.
In an ideal world, educators write objectives beginning with “students will be able to,” or some variation, and use them as benchmark competencies. These seem clear in a vacuum, but in real-world learning, the statements don’t always map to straightforward outcomes.
For example, human consensus on what success looks like for a seemingly simple standard like “analyze the structure of a text” is inconsistent; no global agreement on what that analysis looks like exists. For a “hotdog/not a hotdog” style classifier, success is clear, but for each Common Core standard, the rabbit hole of what success means gets deeper.
Two questions:

- Can we bypass representations altogether?
- Can we create custom taxonomies that are more graph-friendly?
The answer: for algorithms to work, they must account for how the teacher interprets success rather than an idealized, lab-conditions version of success.
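As a sketch of the graph-friendly-taxonomy question, competencies could be stored as a graph in which each node keeps multiple teacher-specific interpretations of success rather than one canonical label. Everything below (the `Competency` class, the sample statements and rubrics) is hypothetical and for illustration only:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a competency node keeps several teacher-specific
# interpretations of "success" instead of a single canonical label.
@dataclass
class Competency:
    statement: str                                        # "Students will be able to ..."
    prerequisites: list = field(default_factory=list)     # upstream Competency nodes
    success_criteria: dict = field(default_factory=dict)  # teacher_id -> rubric text

analyze = Competency(
    statement="Students will be able to analyze the structure of a text",
    success_criteria={
        "teacher_a": "identifies chapter, scene, and stanza boundaries",
        "teacher_b": "explains how each section builds on the last",
    },
)
summarize = Competency(
    statement="Students will be able to summarize a text",
    prerequisites=[analyze],
)

def all_prerequisites(c):
    """Collect every transitive prerequisite of a competency."""
    found = []
    for p in c.prerequisites:
        found.extend(all_prerequisites(p))
        found.append(p)
    return found

# A model scoring "summarize" can look up each teacher's rubric for its
# prerequisites instead of assuming one global definition of success.
rubrics = {t: r for p in all_prerequisites(summarize)
           for t, r in p.success_criteria.items()}
```

The point of the structure is that the per-teacher rubric travels with the competency, so an algorithm never has to pretend a single global success definition exists.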
Codification of Curriculum and Pedagogy
AI needs to identify what good curriculum and teaching look like, but that’s difficult to measure. Data scientists often reach for the “data points” of good teaching, wanting to quantify or vectorize it, yet the messy nature of excellent pedagogy makes it a moving target.
You’d have to build a taxonomy. Where is the teacher? What is he or she doing? What are the students doing? How do we label each chunk of what happens in class? We don’t have a good answer for that.
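To make the taxonomy question concrete, one minimal sketch is a fixed label set for teacher and student activity, applied to time chunks of a lesson. The label names here are invented for illustration; the real difficulty Arora points to is agreeing on the label set itself:

```python
# Hypothetical labeling schema for chunks of classroom time.
# The label vocabularies are illustrative only -- choosing and agreeing
# on them is the unsolved part of the problem.
TEACHER_ACTIONS = {"lecturing", "questioning", "circulating", "one_on_one"}
STUDENT_ACTIONS = {"listening", "group_work", "independent_practice", "presenting"}

def label_chunk(start_min, end_min, teacher, students):
    """Validate and record one labeled chunk of a class session."""
    if teacher not in TEACHER_ACTIONS:
        raise ValueError(f"unknown teacher action: {teacher}")
    if students not in STUDENT_ACTIONS:
        raise ValueError(f"unknown student action: {students}")
    return {"span": (start_min, end_min), "teacher": teacher, "students": students}

# One session becomes a sequence of labeled chunks a model could consume.
session = [
    label_chunk(0, 10, "lecturing", "listening"),
    label_chunk(10, 35, "circulating", "group_work"),
]
```

Once sessions are sequences of labeled chunks like this, they become data a model can consume; without the shared vocabulary, there is nothing to train on.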
On top of that, it’s unclear whether teachers themselves are even concerned with this problem. The good news is that teachers are gaining an appreciation for codification. US schools already require teachers to map curriculum, but there are artificial barriers to developing a common language for describing every move. The US does have such a language, but unfortunately it’s all copyrighted. Teaching method? Copyrighted. Educational benchmark? Copyrighted. Learning concept? Yes, copyrighted.
The answer: figure out how to work one step below the codification layers that bump up against copyright.
Economic Incentives to Deep AI Research
Funding is grim. Academic communities aren’t sure how to incentivize this research beyond things like personalized AI tutors that students can pay for. Still, some mutually beneficial relationships can happen in education research:
- Problem-domain targeted: for example, robotics and long-term rewards, or natural language plus knowledge retrieval.
- Technique- and performance-targeted
What justifies the investment? In education, it isn’t immediately apparent what incentives companies have to undertake research in a specifically educational context. A few companies, such as ACT/ECT testing companies, are exceptions, but it’s largely an open problem.
Education companies could take notes from healthcare on AI momentum. The healthcare community has broad consensus on using data to find solutions and lower costs, and it agrees on the immediacy of the impact, unlike the long-term goals of education. What drives those attitudes is access to datasets, an area where education falls behind.
The answer: create an open AI research institute for education research.
Holistic Capturing of Student Performance
We have overly simplistic models of what’s happening in the student’s mind. In fact, this is a tough question for AI in general: we just don’t know what’s happening in the mind, so it’s hard to assess not only competency but learning itself.
Our current models are highly binary, pairing knowledge components with a series of competency exercises. They don’t consider what the student actually knows or where they are in their learning journey. We require extensive teacher training, yet the tools teachers use come with little training, modification, or assessment.
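One existing, less binary alternative worth noting here is Bayesian Knowledge Tracing, which maintains a probability that the student knows a skill and updates it after every observed answer, rather than flipping a mastered/not-mastered flag. A minimal sketch follows; the parameter values (slip, guess, and learn rates) are illustrative, not fitted to any real data:

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: update P(student knows the
    skill) after observing a single correct/incorrect answer."""
    if correct:
        # Correct answer: either they know it (and didn't slip) or guessed.
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        # Incorrect answer: either they know it but slipped, or don't know it.
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Account for the chance the student learned the skill on this step.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior belief that the skill is known
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
# The estimate climbs with correct answers and dips after the miss,
# instead of snapping between "mastered" and "not mastered".
```

Even this simple model gives a graded picture of where a student is in the learning journey, which is exactly what a binary competency flag throws away.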
We also have insufficient data because of privacy issues and limited data literacy. In one crucial initiative, Melinda Gates attempted to create an open data warehouse, but it failed completely because of privacy concerns and skepticism.
The business model for educational products just isn’t there. Slow turnover keeps some businesses out of the field altogether, and ultimately, the decentralization of education makes data collection prohibitively difficult.
The answer: get creative with data acquisition.
Efficient Machine Learning
Ultimately, the problem of AI in education remains open because we’re still waiting for more data-efficient systems that can sample and train models despite a persistent lack of data. Once our systems are better suited to techniques like fuzzy logic and decision trees, we may have better luck answering the AI-in-education question.