It’s growing on me

Pictured: a robot sitting, looking at a book, and possibly holding a pen. Photo by Andrea de Santis via Unsplash.

My first experience with a generative AI tool came early last year, when I created a graphic to accompany an article in the quarterly newsletter I compile for a Special Libraries Association chapter. The article was about using ChatGPT for competitive intelligence research; in this case, the author used it to locate internet resources about a particular company. Since the transcript wasn’t included, I typed a question or two into ChatGPT myself and used a screenshot as the accompanying image.

Since then, I’ve used AI to create images in Adobe Express and Photoshop for my Design of Technical Documents class last summer, and later last year my company introduced an in-house image generator for employees. But it wasn’t until last week, when I started using ChatGPT for course assignments, that I spent considerable time experimenting with a text generator. I think I’m a convert to the idea that generative AI can be (or must be) a tool professionals use to improve our work, and not just a tool that might take our jobs.

Will generative AI take writers’ jobs? I found Dr. Ryan Boettger’s approach to this question in his paper “From Technical Communicator to AI Whisperer: The Rise of the Prompt Engineer” quite meaningful, and I think it will shape how I approach genAI through the next phase of my career. We know that AI has not taken all the writing jobs just yet, and we also know it will be around for quite some time. As with any tool we can use to improve our work, the best thing we can do is become proficient with it. It’s hard to imagine a future where technical communicators and other professionals won’t need to interact with these tools at all.

This week I discovered that my company has its own internal text generator (probably licensed from OpenAI or a company with a similar tool). I was delighted when I first heard the suggestion that writers use genAI tools to understand new concepts by asking the tool to explain them in simple terms. I first encountered this idea in a video by Amruta Ranade that Dr. Kim assigned in my Content Strategies class, and several of this week’s course materials mention this use as well, including Dr. Kim’s lecture, Dr. Boettger’s paper, and Ellis Pratt’s podcast episode. As a research librarian, I’m perpetually in the “pre-writing” phase, and because I assist engineers whose work is truly a mystery to me, I’m excited to try this out in my day-to-day work.

Though I started using genAI tools last year, most of my interaction with one has come in the past week, when I used ChatGPT 4.0 to help evaluate the tone of voice assignments submitted by my classmates. I was so impressed with how ChatGPT could rewrite the prompt in a funny, casual, or formal tone of voice that I didn’t immediately notice its propensity to exaggerate tone words in prompts, as discussed by Taylor Dykes and others in this week’s materials. Or perhaps I noticed, but trusted that it somehow knew better than I did. That’s probably due partly to my lack of experience consciously modifying my tone and partly to my being new to genAI tools. Taylor’s examples of how genAI struggles with tone prompts reminded me to question its outputs, and this will definitely influence my approach when using ChatGPT to assist with tone of voice through the rest of this course.

