A small tool to view real-world ActivityPub objects as JSON! Enter a URL or username from Mastodon or a similar service below, and we'll send the server a request with the right Accept header to view the underlying object.
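
For the curious, here's roughly what that request looks like as a minimal Python sketch (using the requests library; this is illustrative, not the tool's actual implementation):

    import requests

    def fetch_activitypub_object(url: str) -> dict:
        # Ask for the ActivityPub JSON-LD representation instead of the
        # HTML page. Mastodon also accepts "application/activity+json".
        headers = {
            "Accept": 'application/ld+json; profile="https://www.w3.org/ns/activitystreams"'
        }
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()
        return response.json()

    # Fetching the example status shown below:
    note = fetch_activitypub_object(
        "https://fediphilosophy.org/@anteagle/113512268463566046"
    )
    print(note["type"])  # "Note"

Running this against the URL above returns the object shown here: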
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {
      "ostatus": "http://ostatus.org#",
      "atomUri": "ostatus:atomUri",
      "inReplyToAtomUri": "ostatus:inReplyToAtomUri",
      "conversation": "ostatus:conversation",
      "sensitive": "as:sensitive",
      "toot": "http://joinmastodon.org/ns#",
      "votersCount": "toot:votersCount",
      "Hashtag": "as:Hashtag"
    }
  ],
  "id": "https://fediphilosophy.org/users/anteagle/statuses/113512268463566046",
  "type": "Note",
  "summary": "Long rant about AI in assessment",
  "inReplyTo": null,
  "published": "2024-11-19T23:42:40Z",
  "url": "https://fediphilosophy.org/@anteagle/113512268463566046",
  "attributedTo": "https://fediphilosophy.org/users/anteagle",
  "to": [
    "https://www.w3.org/ns/activitystreams#Public"
  ],
  "cc": [
    "https://fediphilosophy.org/users/anteagle/followers"
  ],
  "sensitive": true,
  "atomUri": "https://fediphilosophy.org/users/anteagle/statuses/113512268463566046",
  "inReplyToAtomUri": null,
  "conversation": "tag:fediphilosophy.org,2024-11-19:objectId=5891781:objectType=Conversation",
  "content": "<p>I was fairly laissez-faire about AI in assessment until recently. I thought that if you showed trust in students, and designed engaging assessments, then at least the good students would write their own work. But at my institution the past two years has shown a dramatic transformation. The number of academic integrity cases has increased 80%, and in my Faculty 'inappropriate use of AI' now comprises 65% of all cases; traditional plagiarism has shrunk from >60% of all cases in in 2022 to 10% in 2024. These are only reported cases, i.e., those where instructors had sufficient evidence to bother to go to an academic integrity investigation, so is likely to be a massive undercount, given how hard it is to find solid evidence of use of chatbots to write. I'm now convinced that among our students it is the general rule to use gen AI tools to do a significant amount of what writing exercises are supposed to encourage students to do.</p><p>Our institution isn't much help. It is (perhaps following the example of its senior management, who recently admitted by show of hands in a public forum that most of them are heavy users of gen AI in their work) keen to involve AI in assessment. A recently released policy document states that the only assessments in which 'students rely solely on their knowledge, understanding, and skills' – their words! – are to be in person exams and other AI-proof assessments. In all other assignment types, including essays, our students *must* be permitted to involve AI to some extent. </p><p>This seems to me nuts. What exactly are we supposed to be assessing under this policy – which of our students can afford premium Claude subscriptions? Spurious claims are made that we need 'authentic' assessment and students will encounter these tools in the workforce – well, yes, probably their bosses will force this on them. Two points. (1) Using an AI chatbot isn't exactly rocket science; the much-touted future career of 'prompt engineer' will, I am confident, never eventuate. (2) Students who can understand, organise, and synthesize written information are better users of these tools – better able to see their weakness and detect problematic output. I know of no argument that would suggest being a competent writer and critical thinker will make you worse at using AI-'enhanced' tools, except maybe you will be less willing to sign off on arbitrary bullshit.</p><p>So I am going back to in person exams. I will make them open book/open resource, and I will distribute questions in advance (probably I will distribute 30 questions and say '10 of these will be on the exam' or something). But I want students to be doing their own work. I won't make it especially high stakes; this isn't going to be the Oxford model. But enough to make it count itself, and incentivise students to study and so make it a sufficiently reliable validation of their other assignment submissions. I honestly didn't think I would go back to F2F exams, but if we are in the credentialing business, as our regulator thinks we are, then I don't see any other way to reliably associate the work I'm grading with the students who are submitting. \n</p><p><a href=\"https://fediphilosophy.org/tags/AI\" class=\"mention hashtag\" rel=\"tag\">#<span>AI</span></a> <a href=\"https://fediphilosophy.org/tags/AIHype\" class=\"mention hashtag\" rel=\"tag\">#<span>AIHype</span></a> <a href=\"https://fediphilosophy.org/tags/LLMs\" class=\"mention hashtag\" rel=\"tag\">#<span>LLMs</span></a> <a href=\"https://fediphilosophy.org/tags/Education\" class=\"mention hashtag\" rel=\"tag\">#<span>Education</span></a> <a href=\"https://fediphilosophy.org/tags/Assessment\" class=\"mention hashtag\" rel=\"tag\">#<span>Assessment</span></a></p>",
  "contentMap": {
    "en": "<p>I was fairly laissez-faire about AI in assessment until recently. I thought that if you showed trust in students, and designed engaging assessments, then at least the good students would write their own work. But at my institution the past two years has shown a dramatic transformation. The number of academic integrity cases has increased 80%, and in my Faculty 'inappropriate use of AI' now comprises 65% of all cases; traditional plagiarism has shrunk from >60% of all cases in in 2022 to 10% in 2024. These are only reported cases, i.e., those where instructors had sufficient evidence to bother to go to an academic integrity investigation, so is likely to be a massive undercount, given how hard it is to find solid evidence of use of chatbots to write. I'm now convinced that among our students it is the general rule to use gen AI tools to do a significant amount of what writing exercises are supposed to encourage students to do.</p><p>Our institution isn't much help. It is (perhaps following the example of its senior management, who recently admitted by show of hands in a public forum that most of them are heavy users of gen AI in their work) keen to involve AI in assessment. A recently released policy document states that the only assessments in which 'students rely solely on their knowledge, understanding, and skills' – their words! – are to be in person exams and other AI-proof assessments. In all other assignment types, including essays, our students *must* be permitted to involve AI to some extent. </p><p>This seems to me nuts. What exactly are we supposed to be assessing under this policy – which of our students can afford premium Claude subscriptions? Spurious claims are made that we need 'authentic' assessment and students will encounter these tools in the workforce – well, yes, probably their bosses will force this on them. Two points. (1) Using an AI chatbot isn't exactly rocket science; the much-touted future career of 'prompt engineer' will, I am confident, never eventuate. (2) Students who can understand, organise, and synthesize written information are better users of these tools – better able to see their weakness and detect problematic output. I know of no argument that would suggest being a competent writer and critical thinker will make you worse at using AI-'enhanced' tools, except maybe you will be less willing to sign off on arbitrary bullshit.</p><p>So I am going back to in person exams. I will make them open book/open resource, and I will distribute questions in advance (probably I will distribute 30 questions and say '10 of these will be on the exam' or something). But I want students to be doing their own work. I won't make it especially high stakes; this isn't going to be the Oxford model. But enough to make it count itself, and incentivise students to study and so make it a sufficiently reliable validation of their other assignment submissions. I honestly didn't think I would go back to F2F exams, but if we are in the credentialing business, as our regulator thinks we are, then I don't see any other way to reliably associate the work I'm grading with the students who are submitting. \n</p><p><a href=\"https://fediphilosophy.org/tags/AI\" class=\"mention hashtag\" rel=\"tag\">#<span>AI</span></a> <a href=\"https://fediphilosophy.org/tags/AIHype\" class=\"mention hashtag\" rel=\"tag\">#<span>AIHype</span></a> <a href=\"https://fediphilosophy.org/tags/LLMs\" class=\"mention hashtag\" rel=\"tag\">#<span>LLMs</span></a> <a href=\"https://fediphilosophy.org/tags/Education\" class=\"mention hashtag\" rel=\"tag\">#<span>Education</span></a> <a href=\"https://fediphilosophy.org/tags/Assessment\" class=\"mention hashtag\" rel=\"tag\">#<span>Assessment</span></a></p>"
  },
  "updated": "2024-11-19T23:43:47Z",
  "attachment": [],
  "tag": [
    {
      "type": "Hashtag",
      "href": "https://fediphilosophy.org/tags/ai",
      "name": "#ai"
    },
    {
      "type": "Hashtag",
      "href": "https://fediphilosophy.org/tags/aihype",
      "name": "#aihype"
    },
    {
      "type": "Hashtag",
      "href": "https://fediphilosophy.org/tags/LLMs",
      "name": "#LLMs"
    },
    {
      "type": "Hashtag",
      "href": "https://fediphilosophy.org/tags/education",
      "name": "#education"
    },
    {
      "type": "Hashtag",
      "href": "https://fediphilosophy.org/tags/assessment",
      "name": "#assessment"
    }
  ],
  "replies": {
    "id": "https://fediphilosophy.org/users/anteagle/statuses/113512268463566046/replies",
    "type": "Collection",
    "first": {
      "type": "CollectionPage",
      "next": "https://fediphilosophy.org/users/anteagle/statuses/113512268463566046/replies?only_other_accounts=true&page=true",
      "partOf": "https://fediphilosophy.org/users/anteagle/statuses/113512268463566046/replies",
      "items": []
    }
  }
}
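
When you enter a username instead of a URL, there's one extra step: the handle first has to be mapped to its actor URL. On the fediverse that's conventionally done with a WebFinger lookup (RFC 7033). A minimal sketch, assuming the server exposes the standard /.well-known/webfinger endpoint:

    import requests

    def resolve_handle(handle: str) -> str:
        # Resolve a handle like "@anteagle@fediphilosophy.org" to its
        # ActivityPub actor URL via WebFinger. Illustrative only; error
        # handling is minimal.
        user, domain = handle.lstrip("@").split("@")
        response = requests.get(
            f"https://{domain}/.well-known/webfinger",
            params={"resource": f"acct:{user}@{domain}"},
            timeout=10,
        )
        response.raise_for_status()
        # The actor URL is the "self" link whose type is ActivityPub JSON.
        for link in response.json().get("links", []):
            if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
                return link["href"]
        raise ValueError(f"no ActivityPub actor link found for {handle}")

    print(resolve_handle("@anteagle@fediphilosophy.org"))
    # "https://fediphilosophy.org/users/anteagle" (the attributedTo value above)

From the actor URL, the actor object can be fetched with the same Accept header as before, and its outbox followed to individual statuses like the one shown here.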