College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.
I think that's actually a good idea? Sucks for e-learning as a whole, but I always found online exams (and also online interviews) to be very easy to game.
My handwriting has always been terrible. It was a big issue in school until I was able to turn in printed assignments.
Like with a lot of school things, they do a shit thing without thinking about negative effects. They always want a simple solution to a complex problem.
I did my undergrad 2008-2012, we had zero online exams. Every exam was in person and hand written. People with disabilities were accommodated, usually with extra writing time for those that need it, or a separate room with a writer for you to narrate to.
It's really not a terrible issue, and something universities have been able to deal with for centuries.
Handwriting an essay means I’m giving 90% of my energy and time to drawing ugly squiggles and 10% to making a sensible argument. If I’m allowed to use a computer, it’s 99% sensible argument and 1% typing. Surely this will not have any impact on the quality of the text the teachers have to read…
Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.
I am not arguing exams are perfect, mind, but I'd rather doubt a few students' inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).
Case in point, ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the ability of the students, but do suggest they can obfuscate AI work well.
Here's a somewhat tangential counter, which I think some of the other replies are trying to touch on ... why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?
In a world where something like AI can come up and change the landscape in a matter of a year or two ... how much value is left in the idea of assessing people's value through exams (and to be clear, I'm saying this as someone who's done very well in exams in the past)?
This isn't to say that knowing things is bad or making sure people meet standards is bad etc. But rather, to question whether exams are fit for purpose as means of measuring what matters in a world where what's relevant, valuable or even accurate can change pretty quickly compared to the timelines of ones life or education. Not long ago we were told that we won't have calculators with us everywhere, and now we could have calculators embedded in our ears if wanted to. Analogously, learning and examination is probably being premised on the notion that we won't be able to look things up all the time ... when, as current AI, amongst other things, suggests, that won't be true either.
An exam assessment structure naturally leans toward memorisation and being drilled in a relatively narrow band of problem-solving techniques, which are, IME, often crammed prior to the exam and often forgotten quite severely pretty soon afterward. So even presuming that the things students know during the exam are valuable, it is questionable whether the measurement of value provided by the exam is actually valuable. And once the value of that information is brought into question ... you have to ask ... what are we doing here?
Which isn't to say that there's no value created in doing coursework and cramming for exams. Instead, given that a computer can now so easily augment our ability to do this assessment, you have to ask what education is for and whether it can become something better than what it is given what are supposed to be the generally lofty goals of education.
In reality, I suspect (as many others do) that the core value of the assessment system is to simply provide a filter. It's not so much what you're being assessed on as much as your ability to pass the assessment that matters, in order to filter for a base level of ability for whatever professional activity the degree will lead to. Maybe there are better ways of doing this that aren't so masked by other somewhat disingenuous goals?
Beyond that, there's a raft of things the education system could emphasise more than exam-based assessment. Long-form problem solving and learning. Understanding things or concepts as deeply as possible and creatively exploring the problem space and its applications. Actually learning the scientific method in practice. Core and deep concepts, both in theory and application, rather than specific facts. Breadth over depth, in general. Actual civics and the knowledge required to be a functioning member of the electorate.
All of which are hard to assess, of course, which is really the main point of pushing back against your comment ... maybe we're approaching the point where the cost-benefit equation for practicable assessment is being tipped.
In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams ... which I think tells you pretty clearly what kind of task an exam actually is ... a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.
So, a calculator is a great shortcut, but it's useless for most mathematics (i.e. proof!). A lot of people assume that having a calculator means they do not need to learn mathematics - a lot of people are dead wrong!
In terms of exams being about memory: I run mine open book (i.e. students can take pre-prepped notes in). Did you know some students still cram and forget right after the exam? And did you know they forget even faster for coursework?
Your argument is a good one, but let's take it further - let's rebuild education towards an employer centric training system, focusing on the use of digital tools alone. It works well, productivity skyrockets, for a few years, but the humanities die out, pure mathematics (which helped create AI) dies off, so does theoretical physics/chemistry/biology. Suddenly, innovation slows down, and you end up with stagnation.
Rather than moving us forward, such a system would lock us into place and likely create out of date workers.
At the end of the day, AI is a great tool, but so is a hammer and (like AI today), it was a good tool for solving many of the problems of its time. However, I wouldn't want to only learn how to use a hammer, otherwise how would I be replying to you right now?!?
I think a central point you're overlooking is that we have to be able to assess people along the way. Once you get to a certain point in your education you should be able to solve problems that an AI can't. However, before you get there, we need some way to assess you in solving problems that an AI currently can. That doesn't mean that what you are assessed on is obsolete. We are testing to see if you have acquired the prerequisites for learning to do the things an AI can't do.
Here's a somewhat tangential counter, which I think some of the other replies are trying to touch on ... why, exactly, continue valuing our ability to do something a computer can so easily do for us (to some extent obviously)?
My theory prof said there would be paper exams next year. Because it's theory. You need to be able to read an academic paper and know what theoretical basis the authors had for their hypothesis. I'm in liberal arts/humanities. Yes, we still exist, and we are the ones that AI can't replace. If the whole idea is that it pulls from information that's already available, and a researcher's job is to develop new theories and ideas and do survey or interview research, then we need humans for that. If I'm trying to become a professor/researcher, using AI to write my theory papers is not doing me or my future students any favors. Statistical research, on the other hand: they already use programs for that and use existing data, so idk. But even then, any AI statistical analysis should be testing a new hypothesis that humans came up with, or a new angle on an existing one.
So idk how this would affect engineering or tech majors. But for students trying to be psychologists, anthropologists, social workers, professors, then using it for written exams just isn't going to do them any favors.
As they are talking about writing essays, I would argue the importance of being able to do it lies in being able to analyze a book/article/whatever, make an argument, and defend it. Being able to read and think critically about the subject would also be very important.
Sure, rote memorization isn't great, but neither is having to look something up every single time you ever need it because you forgot. There are also many industries in which people do need a large information base as close recall. Learning to do that much later in life sounds very difficult. I'm not saying people should memorize everything, but not having very many facts about that world around you at basic recall doesn't sound good either.
It's an interesting point.. I do agree memorisation is (and always has been) used as more of a substitute for actual skills. It's always been a bugbear of mine that people aren't taught to problem solve, just regurgitate facts, when facts are literally at our fingertips 24/7.
In my experience, the best means of preparing for exams, as is universally advised, is to take previous or practice exams … which I think tells you pretty clearly what kind of task an exam actually is … a practiced routine in something that narrowly ranges between regurgitation and pretty short-form, practiced and shallow problem solving.
You are getting some flak, but imho you are right. The only thing an exam really tests is how well you do in exams. Of course, educators don't want to hear that. But if you take a deep dive into the (scientific) literature on the topic, the question "What are we actually measuring here?" is rightly raised.
In my experience, they love to give exams where it doesn't matter what notes you bring, you're on the same level whether you write down only the essential equations, or you copy down the whole textbook.
Case in point, ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of the ability of the students
I get that this is a quick post on social media and only an anecdote, but that is interesting. What do you think the connection is? AI, anxiety, or something else?
It's a tough one because I cannot say with 100% certainty that AI is the issue. Anxiety is definitely a possibility in some cases, but not all; perhaps thinking time might be a factor, or even just good old copying and then running the work through a paraphraser. The large number of absences also means it was hard to benchmark those students based on class assessment (yes, we are always tracking how you are doing in class, not to judge you, but just in case you need some extra help!).
However, AI is a strong contender since the "open book" part didn't include the textbook, it allowed the students to take a booklet into the exams with their own notes (including fully worked examples). They scored low because they didn't understand their own notes, and after reviewing the notes they brought in (all word perfect), it was clear they did not understand the subject.
That sounds like AI. If you do your homework then even sitting in a regular exam you should score better than 20%. This exam being open book, it sounds like they were unfamiliar with the textbook and could not find answers fast enough.
Not the previous poster. I taught an introduction to programming unit for a few semesters. The unit was almost entirely portfolio based ie all done in class or at home.
The unit had two litmus tests under exam like conditions, on paper in class. We're talking the week 10 test had complexity equal to week 5 or 6. Approximately 15-20% of the cohort failed this test, which if they were up to date with class work effectively proved they cheated. They'd be submitting course work of little 2d games then on paper be unable to "with a loop, print all the odd numbers from 1 to 20"
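For anyone curious, the litmus-test question quoted above takes only a few lines in any language; here's a Python version (the unit's actual language isn't stated, so this is just an illustration of the expected level):

```python
# The in-class litmus test: with a loop, print all the odd
# numbers from 1 to 20.
odds = []
for n in range(1, 21):
    if n % 2 == 1:
        odds.append(n)
print(odds)  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
```

A student who has genuinely been writing small 2D games for weeks should be able to produce this on paper without hesitation, which is what made it an effective cheating detector.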
Graduated a year ago, just before this AI craze was a thing.
I feel there's a social shift when it comes to education these days. It's mostly: "do a 500-1,000 word essay to get 1.5% of your grade". The education doesn't matter anymore, the grades do; if you pick something up along the way, great! But it isn't that much of a priority.
I think it partially comes from colleges squeezing students of their funds, and indifferent professors who just assign busywork for the sake of it. There are a lot of uncaring professors that just throw tons of work at students, turning them back to the textbook whenever they ask questions.
However, I don't doubt a good chunk of students use AI on their work to just get it out of the way. That really sucks and I feel bad for the professors that actually care and put effort into their classes. But, I also feel the majority does it in response to the monotonous grind that a lot of other professors give them.
I recently finished my degree, and exam-heavy courses were the bane of my existence. I could sit down with the homework, work out every problem completely with everything documented, and then sit to an exam and suddenly it's "what's a fluid? What's energy? Is this a pencil?"
The worst example was a course with three exams worth 30% of the grade, attendance 5% and homework 5%. I had to take the course twice; 100% on HW each time, but barely scraped by with a 70.4% after exams on the second attempt. Courses like that took years off my life in stress. :(
We have no way to determine if you did the work, or if an AI did, and if called into a court to certify your expertise we could not do so beyond a reasonable doubt.
Could you ever though, when giving them work they had to do not in your physical presence? People had their friends, parents or ghostwriters do the work for them all the time. You should know that.
This is not an AI problem, AI "just" made it far more widespread and easier to access.
"Sometimes" would be my answer. I caught students who colluded during online exams, and even managed to spot students pasting directly from an online search. Those were painful conversations, but I offered them resits and they were all honest and passed with some extra classes.
In the real world, will those students be working from a textbook, or from a browser with some form of AI accessible in a few years?
What exactly is being measured and evaluated? Or has the world changed, and existing infrastructure is struggling to cling to the status quo?
Were those years of students being forced to learn cursive in the age of the computer a useful application of their time? Or math classes where a calculator wasn't allowed?
I can only imagine how useful a programming class would be where you have to write code on a blank sheet of paper with a pen and no linters, then.
Maybe the focus on where and how knowledge is applied needs to be revisited in light of a changing landscape.
For example, how much more practically useful might test questions be that provide a hallucinated wrong answer from ChatGPT and then task the students to identify what was wrong? Or provide them a cross discipline question that expects ChatGPT usage yet would remain challenging because of the scope or nuance?
I get that it's difficult to adjust to something that's changed everything in the field within months.
But it's quite likely a fair bit of how education has been done for the past 20 years in the digital age (itself a gradual transition to the Internet existing) needs major reworking to adapt to changes rather than simply oppose them, putting academia in a bubble further and further detached from real world feasibility.
If you're going to take a class to learn how to do X, but never actually learn how to do X because you're letting a machine do all the work, why even take the class?
In the real world, even if you're using all the newest, cutting edge stuff, you still need to understand the concepts behind what you're doing. You still have to know what to put into the tool and that what you get out is something that works.
If the tool, AI, whatever, is smart enough to accomplish the task without you actually knowing anything, what the hell are you useful for?
As an anecdote though, I once saw someone simply forwarding (i.e. copying and pasting) their exam questions to ChatGPT. His answers were just ChatGPT responses, paraphrased to make them look less GPT-ish. I am not even sure whether he understood the questions themselves.
In this case, the only skill that is tested... is English paraphrasing.
I'll field this because it does raise some good points:
It all boils down to how much you trust what is essentially matrix multiplication, trained on the internet, with some very arbitrarily chosen initial conditions. Early on when AI started cropping up in the news, I tested the validity of answers given:
1. For topics aimed at 10-18 year olds, it does pretty well. Its answers are generic, and it makes mistakes every now and then.
2. For 1st-3rd year degree level, it really starts to make dangerous errors, but it's a good tool for summarising material from textbooks.
3. At Masters level and above, it spews (very convincing) bollocks most of the time.
Recognising the mistakes in (1) requires checking it against the course notes, something most students manage. Recognising the mistakes in (2) is often something a stronger student can manage, but not a weaker one. As for (3), you are going to need to be an expert to recognise the mistakes (at one point it literally misinterpreted my own work back at me).
The irony is, education in its current format is already working with AI, it's teaching people how to correct the errors given. Theming assessment around an AI is a great idea, until you have to create one (the very fact it is moving fast means that everything you teach about it ends up out of date by the time a student needs it for work).
However, I do agree that education as a whole needs overhauling. How to do this? Maybe fund it a bit better so we're able to hire folks to help develop better courses. At the moment, every "great course" you've ever taken was paid for in blood (i.e. 50-hour weeks of teaching/marking/prepping/meeting arbitrary research requirements).
Sorry, but it was never about OUR ability in the first place.
In my country, exams are old, outdated, and often way too hard. All classes are outdated and way too hard. It often feels like we are stuck in the middle of the 20th century.
You have no chance when you have a disability. Or when you have kids, or parents to take care of. Or hell: you have to work, because you can't afford university otherwise.
So I can totally understand why students feel the need to use AI to survive that torture. I don't feel sorry for an outdated university system.
If it is about OUR ability, then create a system that is built for students and their needs.
Millennial here, haven't had to seriously write out anything consistently in decades at this point. There's no way their handwriting can be worse than mine and still be legible lol.
Last week of school I found out my history teacher took all my handwritten things to the language teacher and had her copy them into legibility. I felt so bad for that lady.
I block print and vary caps and lowercase fairly randomly. I have particular trouble with the number 5. I guess it’s legible, but it sure ain’t pretty. It’s also fucking torture, and I would walk right out of school if this were done to me. Oh yeah, I’m Gen X.
has led some college professors to reconsider their lesson plans for the upcoming fall semester.
I'm sure they'll write exams that actually require an actual understanding of the material rather than regurgitating the seminar PowerPoint presentations as accurately as possible...
My favourite lecturer at uni actually did that really well. He also said the exam was small and could be done in about an hour or two but gave us a 3 hour timeslot because he said he wanted us to take our time and think about each problem carefully. That was a great class.
Norway has been pushing digital exams for quite a few years, to the point where high school exams went to shit for lots of people this year because the system went down and they had no backup (who woulda thought?). In at least some universities most of or all exams have been digital for a couple years.
I think this is largely a bad idea, especially for engineering exams, or any exam where you need to draw/sketch or write equations. For purely textual exams, it's fine. This has also led to many more multiple-choice or otherwise automatically corrected questions, which the universities explicitly state is a way of cutting costs. I think that's terrible; nothing at university level should be reduced to a multiple-choice question. They should be forbidden.
The university I went to explicitly did in person written exams for pretty much all exams specifically for anti-cheating (even before the age of ChatGPT). Assignments would use computers and whatnot, but the only way to reliably prevent cheating is to force people to write the exams in carefully controlled settings.
Honestly, probably could have still used computers in controlled settings, but pencil and paper is just simpler and easier to secure.
One annoying thing is that this meant they also usually viewed assignments as untrusted and thus not worth much of the grade. You'd end up with assignments taking dozens of hours but only worth, say, 15% of your final grade. So practically all your grade is on a couple of big, stressful exams. A common breakdown I had was like 15% assignments, 15% midterm, and 70% final exam.
This isn't exactly novel. Some professors allow a cheat sheet. But that just means that the exam will be harder.
Physics exam that allows a cheat sheet asks you to derive the law of gravity. Well, OK, you write the answer at the bottom, pulled from your cheat sheet. Now what? If you recall how it was originally created, you probably write Newton's three laws at the top of your paper... and then start doing some math.
Calculus exam that lets you use Wolfram Alpha? Just a really hard exam where you must show all of your work.
Now, with ChatGPT, it's no longer enough to have a take-home essay to force students to engage with the material, so you find new ways to do so. Written, in-person essays are certainly one way to do that.
Hate to break it to you, but you picked probably the one law in physics that is empirically derived. There is no mathematical derivation of Newton's law of gravity from first principles.
Yes, but you can still start with Kepler's laws and Newton's three laws and, with basic math skills, recreate the equation. I know, because it was on a physics exam I took ten years ago.
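For the circular-orbit version of that reconstruction (a simplified sketch, not the full elliptical treatment):

```latex
% Centripetal force (Newton's second law) for a circular orbit of period T:
F = \frac{m v^2}{r}, \qquad v = \frac{2\pi r}{T}
\;\Rightarrow\; F = \frac{4\pi^2 m r}{T^2}
% Kepler's third law, T^2 = k r^3, eliminates the period:
\;\Rightarrow\; F = \frac{4\pi^2 m}{k\, r^2} \propto \frac{m}{r^2}
% Symmetry between the two bodies (Newton's third law) then suggests
F = G\,\frac{m M}{r^2}
```

This is exactly the kind of question where a cheat sheet with the final formula doesn't help: the marks are in the steps, not the answer.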
Obviously that is the next step for the technically inclined, but even the less inclined may be capable of generating then copying to save time and brain effort.
When I was in College for Computer Programming (about 6 years ago) I had to write all my exams on paper, including code. This isn't exactly a new development.
He's not pointing out that handwritten tests are not something new, but that using handwritten tests over typing them to reflect the student's actual abilities is not new.
I had some teachers ask for handwritten programming exams too (that was more like 20 years ago for me) and it was just as dumb then as it is today. What exactly are they preparing students for? No job will ever require the skill of writing code on paper.
Same. All my algorithms and data structures courses in undergrad and grad school had paper exams. I have a mixed view on these but the bottom line is that I'm not convinced they're any better.
Sure, they might reflect some of the student's abilities better, but if you're an evaluator interested in assessing students' knowledge, a more effective way is to ask directed questions.
What ends up happening a lot of times are implementation questions that ask from the student too much at once: interpretation of the problem; knowledge of helpful data structures and algorithms; abstract reasoning; edge case analysis; syntax; time and space complexities; and a good sense of planning since you're supposed to answer it in a few minutes without the luxury and conveniences of a text editor.
This last one is my biggest problem with it. It adds a great deal of difficulty and stress without adding any value to the evaluator.
You can still have AI write the paper and you copy it from text to paper. If anything, this will make AI harder to detect because it's now AI + human error during the transferring process rather than straight copying and pasting for students.
This thinking just feels like moving in the wrong direction. As an elementary teacher, I know that by next year all my assessments need to be practical or interview based. LLMs are here to stay and the quicker we learn to work with them the better off students will be.
And forget about having any sort of integrity or explaining to kids why it's important for them to know how to do shit themselves instead of being wholly dependent on corporate proprietary software whose accessibility can and will be manipulated to serve the ruling class on a whim 🤦
You ask them any mundane question and they just shrug, and if you press them they pull out their phone to check.
It's important that we do math so that we develop a sense of numeracy. By the same token it's important that we write because it teaches us to organize our thoughts and communicate.
These tools will destroy the quality of education for the students that need it the most if we don't figure out how to rein in their use.
If you want to plug your quarterly data into GPT to generate a projection report I couldn't care less. But for your 8th grade paper on black holes, write it your damn self.
In what ways do you envision working with LLMs as an educator of children?
I have used ChatGPT to explain to myself a number of fairly advanced technical and programming concepts; I work in Animation through my own self-study and some good luck, so I'm constantly trying to up my skills in the math that relates to it. When I come up against a math or C++ term or concept that I do not currently understand, I can generally get a pretty good conceptual understanding of it by working with ChatGPT.
So at one point I wanted to understand what Linear Algebra specifically meant, and it didn't all stick, but I do remember asking it to expand on things it said that weren't clear, and it was able to do so competently. By asking many questions I was able - I think - to get clearer on a number of things which I doubt I would ever have learned otherwise, unless by luck I found someone who knows the math to teach me.
It also flubbed a lot of basic arithmetic, and I had to mentally look for and correct that.
This is useful to an autodidact like myself who has learned how to learn at a University level, to be sure.
I cannot, however, think of a single beneficial way to use this to educate small children with no such years of mental discipline and ability to understand that their teacher is neither a genius nor a moron, but rather, a machine that pumps out florid expressions of data that resemble other expressions of similar data.
Devise a physical problem that can be tested, have everyone in class pull a ChatGPT answer to it, have them read the answers out loud and vote on which one is right, then apply it to the physical version and see it fail. Show them how tweaking the answer just a bit solves the problem.
years of mental discipline and ability to understand that their teacher is neither a genius nor a moron
Ta-da! Just taught them that without all your years.
I cannot, however, think of a single beneficial way to use this to educate small children
Then you're not a teacher. Please don't ever teach small children.
It actually is artificial intelligence. What are you even arguing against man?
Machine learning is a subset of AI, and neural networks are a subset of machine learning. Saying an LLM (based on neural networks for prediction) isn't AI because you don't like it is like saying rock and roll isn't music.
If AI was 'intelligent', it wouldn't have written me a set of instructions when I asked it how to inflate a foldable phone. Seriously, check my first post on Lemmy...
I am arguing against this marketing campaign, that's what. Who decides what "AI" is, and how did we come to decide what fits that title? The concept of AI has been around a long time, since the Greeks even, and it has always been the concept of a man-made man. In modern times, it's been represented as a sci-fi fantasy of sentient androids. "AI" is a term with heavy associations already cooked into it. That's why calling it "AI" is just a way to make it sound like a high-tech, futuristic dream come true. But a predictive text algorithm is hardly "intelligence". It's only being called that to make it sound profitable. Let's stop calling it "AI" and start calling out their bullshit. This is just another cryptocurrency scam: a concept that could theoretically work and be useful to society, but that is not being implemented in a way that lives up to its name.
Maybe machine learning models technically fit the definition of "algorithm" but it suits them very poorly. An algorithm is traditionally a set of instructions written by someone, with connotations of being high level, fully understood conceptually, akin to a mathematical formula.
A machine learning model is a soup of numbers that maybe does something approximately like what the people training it wanted it to do, using arbitrary logic nobody can expect to follow. "Algorithm" is not a great word to describe that.
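A toy illustration of that contrast, with entirely made-up numbers: a hand-written algorithm is legible step by step, while a "learned" model for the same task is just weights whose individual values explain nothing (in a real model they would come from training, not be hand-picked as here):

```python
import math

# A classic algorithm: every step is legible and intentional.
def is_even(n: int) -> bool:
    return n % 2 == 0

# A "learned" classifier for the same task: a soup of numbers.
# These weights are invented for illustration and carry no
# human-readable meaning on their own.
WEIGHTS = [[0.73, -1.42], [-0.88, 1.19]]
BIAS = [0.05, -0.11]

def learned_is_even(n: int) -> bool:
    # Hand-picked "features" of the input; a real model learns these too.
    features = [math.sin(n * math.pi / 2), math.cos(n * math.pi)]
    # Matrix-vector multiply plus bias, then compare the two scores.
    logits = [
        sum(w * f for w, f in zip(row, features)) + b
        for row, b in zip(WEIGHTS, BIAS)
    ]
    # WHY comparing these two numbers gives the right answer is not
    # apparent from inspecting the weights -- that's the point.
    return logits[1] > logits[0]
```

Both functions agree on small inputs, but only one of them can be audited by reading it.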
I have dysgraphia, which makes handwriting really friggin difficult (and even painful!) for me, and makes everyone else's day worse if they have to try and understand what I've written. I need computers to be able to write shit.
Wouldn't it make more sense to find ways on how to utilize the tool of AI and set up criteria that would incorporate the use of it?
There could still be classes / lectures that cover the more classical methods, but I remember being told "you won't have a calculator in your pocket".
My point is, they should be prepping students with the skills to succeed with the tools they will have available, and then give them the education to cover the gaps that AI can't solve. For example, you basically need to review what the AI outputs for accuracy. So maybe a focus on reviewing output and better prompting techniques? Training on how to spot inaccuracies? Spotting possible bias in a system that is skewed by its training data?
Training how to use "AI" (LLMs demonstrably possess zero actual reasoning ability) feels like it should be a separate pursuit from (or subset of) general education to me. In order to effectively use "AI", you need to be able to evaluate its output and reason for yourself whether it makes any sense or simply bears a statistical resemblance to human language. Doing that requires solid critical reasoning skills, which you can only develop by engaging personally with countless unique problems over the course of years and working them out for yourself. Even prior to the rise of ChatGPT and its ilk, there was emerging research showing diminishing reasoning skills in children.
Without some means of forcing students to engage cognitively, there's little point in education. Pen and paper seems like a pretty cheap way to get that done.
I'm all for tech and using the tools available, but without a solid educational foundation (formal or not), I fear we end up a society of snake-oil users in search of blinker fluid.
That's just what we tell kids so they'll learn to do basic math on their own. Otherwise you'll end up with people who can't even do 13+24 without having to use a calculator.
people who can’t even do 13+24 without having to use a calculator
More importantly, you end up with people who don't recognize that 13+24=87 is incorrect. Math->calculator is not about knowing the math, per se, but knowing enough to recognize when it's wrong.
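That ballpark check is exactly the skill in question, and it can be sketched as code. A toy example (function name and tolerance are invented for illustration): estimate the sum by rounding to the nearest ten, then ask whether a claimed answer is anywhere near the estimate.

```python
def looks_plausible(a, b, claimed_sum, slack=10):
    """Rough mental-math check: round each operand to the nearest
    ten, add the estimates, and accept only claimed answers that
    land within `slack` of that estimate."""
    estimate = round(a, -1) + round(b, -1)  # e.g. 13 -> 10, 24 -> 20
    return abs(claimed_sum - estimate) <= slack

print(looks_plausible(13, 24, 37))  # prints: True  (estimate is 30)
print(looks_plausible(13, 24, 87))  # prints: False (87 is nowhere near 30)
```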
I don't envy professors/teachers who are having to figure out novel ways of determining the level of mastery of a class of 30, 40, or 100 students in the era of online assistance. Because, really, we still need people who can turn out top-level, accurate, well-researched documentation. If we lose them, who will we train the next-gen LLM on? ;-)
When will people need to do basic arithmetic in their head? The jump in difficulty from 13+24 to 169+742 is dramatic. Yeah, it makes your life convenient if you can add simple numbers, but is it necessary when everyone has a calculator?
There are some universities looking at AI from this perspective, finding ways to teach proper usage of AI and then building testing methods around the assumption that students will use it.
It's like the calculator in the 80s and 90s. Teachers would constantly tell us "no job's just gonna let you use a calculator, they're paying you to work".
I graduated, and really thought companies were gonna make me do stuff by hand, 'cause calculators made it easy. Lol.
That's actually something that is done (PhD viva). If I had the budget to hire another 6 assistant profs to viva my 120 students, I'd probably do it for my module too!
We still have orals in smaller seminars, and for PhDs. As another poster said there's too many students in most courses to do it, but we absolutely do oral exams for smaller cohorts.
Isn't this kind of ableist? I remember when I was in school I had special accommodations to type instead of write, because my wrists were too weak to write legibly but my fingers were fast enough to type. They legitimately thought I was a really stupid kid until they realized my spelling tests were not incorrect.
They just couldn't read that I had spelled it correctly. Somehow I wrote the word fly, and the teacher mistook my y for a v. I went from being the dumbest kid to the smartest kid as soon as the accommodation was put in place.
Universities have accommodation systems for issues like this. People with disabilities can go to the accommodations office and get what they need to be able to do the work.
The best part is there are handwriting-generating programs, or even web pages that convert text to G-code, letting you use a 3D printer to write things out. In theory it should be really hard to pass that off as human-written, let alone match your own handwriting, but I'm sure it will only get better. I think there are even models that try to match someone's writing.
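At their core, those converters just turn pen strokes into G-code moves. A heavily simplified sketch of the idea (the glyph table, coordinates, and feed rate are all made up; real tools trace actual font outlines and add jitter to look human):

```python
# Each "glyph" is a list of pen strokes; each stroke is a list of
# (x, y) points in millimetres. Two made-up glyphs for illustration.
GLYPHS = {
    "I": [[(0, 0), (0, 8)]],
    "L": [[(0, 8), (0, 0), (4, 0)]],
}

def text_to_gcode(text, spacing=6.0, pen_up=2.0, feed=1500):
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    for i, ch in enumerate(text):
        for stroke in GLYPHS.get(ch, []):
            (x0, y0), rest = stroke[0], stroke[1:]
            lines.append(f"G0 Z{pen_up} ; lift pen")
            lines.append(f"G0 X{x0 + i * spacing} Y{y0} ; travel to stroke start")
            lines.append("G0 Z0 ; pen down")
            for x, y in rest:
                lines.append(f"G1 X{x + i * spacing} Y{y} F{feed} ; draw")
    lines.append(f"G0 Z{pen_up} ; lift pen at end")
    return "\n".join(lines)

print(text_to_gcode("LI"))
```

The hard part of the real tools isn't this plumbing, it's making the strokes look handwritten rather than plotted, which is where the "matching someone's writing" models come in.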
No, but if I were still in school I would be extremely tempted to have it write out an essay instead of writing out pages. The only thing that kills it would be it obviously would not match my handwriting.
As an anecdote, I used to do K-12 sysadmin, and we once had a major issue during a grade 12 English exam, so the students had to handwrite their essays. Almost everyone failed. Most did not even finish. Most students can hardly handwrite anymore, and it makes sense.
Our uni accepts 99% of things on paper (except some reports, which need to be done in LaTeX), and I am glad this is the case. I just can't think normally at a screen. If there's a task, I need to do it on paper first and only then type it out (which is a bit frustrating because I can only type with two fingers).
ChatGPT is free, so many students use it to do their homework the easy way. We need tighter control over this.
AI is a tool that can indeed be of great benefit when used properly. But using it without comprehending and verifying the source material can be downright dangerous (like those lawyers citing fake cases). The point of the essay/exam is to test comprehension of the material.
Using AI at this point is like using a typewriter in a calligraphy test, or autocorrect in a spelling and grammar test.
Although asking for handwritten essays does nothing to combat use of AI. You can still generate content and then transcribe it by hand.
We need to teach people to work with technology, not pretend it doesn't exist. When these models came out, the world changed. If you aren't using them right now, you are being left behind.
AI in its current form is equivalent to the advent of the typewriter.
It's really not. Everything that you write with a typewriter is going to be your own words and your own thoughts. I wouldn't even consider the calculator comparison to be valid. A calculator will spit out an objective truth. AI tools like chatgpt will formulate complex responses to how you prompt it. It adds context, information, and analysis in a way that a typewriter or calculator doesn't. The whole point of school is to learn critical thinking skills and telling chatgpt to write an essay for you based on the assignment criteria will not achieve that goal in any meaningful way.
That being said, there are valid ways to use AI to assist in the learning process but exams are meant to verify your personal understanding of the material.
Only the ones too dumb to incorporate AI usage into their work and grade accordingly. Going to be a load of kids who aren't just missing out on learning how to best use modern tools but who have wasted their time learning obsolete skills.
Thankfully those kids will be able to get a proper education from AI soon.
If your exams can be solved by AI, then your exams are not good enough. How to get around this? Simple: oral exams, aka viva voce. Anyone who has defended a thesis knows the pants-shitting terror this kind of exam induces. It takes longer, but you can truly determine how well the student understands the content.
Nah, I just think of people like me who literally can't write by hand. And no, it's not an education issue; I have a motor impairment in my hands.