Experimenting with AI-generated text this week was honestly very interesting. This assignment really pushed me to think about how I use AI tools in real life, and how much I should trust them. The exercise had me generate an original augmented post using AI and then revise and evaluate it. Here is a link to my Google Doc; the tool I used to generate the original version was ChatGPT.
I think one of the biggest things I learned is that AI can produce content almost immediately, but that doesn’t mean it is always accurate. The original augmented post looked very polished, but when I read the whole thing, I noticed points that were vague and repetitive. AI definitely struggles with capturing depth, and the writing style tends to be overly proper and structured. To me, that keeps it from flowing naturally.
This assignment also showed me that conducting research with AI is very different from traditional research. The whole concept of gathering information from verified sources, which is what we as humans do, is replaced by responses based on patterns in data. AI isn’t able to verify facts the way we can, so it presents information whether it is correct or not. This made me realize that you have to be a lot more careful, because what you are reading might not come from a verified source at all.
Overall, I think AI can be helpful when it comes to brainstorming, but it is something I will never fully trust. I have always believed that AI cannot replicate a human voice or human thought, and I don’t see myself thinking differently anytime soon. Yes, it can be an accessible, helpful tool, but it is definitely not a replacement for critical thinking. I think this assignment did a good job of helping me understand the strengths of AI, as well as the limitations that follow close behind.