Staying in Ethics and Legal with ChatGPT usage?
-
Merriam-Webster:
#########
plagiarize (verb); plagiarized; plagiarizing
to steal and pass off (the ideas or words of another) as one's own; to use (another's production) without crediting the source
##########
So if we look at how this wording matches US copyright law... the AI is determined under law not to be a person. In order to plagiarize, one must take from another person, not from a tool or machine. The Copyright Office has been very clear that humanity is a requirement for copyright ownership, and plagiarism, by the most common American dictionary, follows in kind. It seems black and white to me. Since the AI is the source of the text, then unless the AI is itself plagiarizing (which it is supposed to be trained not to do), the concept of plagiarism cannot apply; it carries the same humanity requirement that copyright does.
The logic that ChatGPT gives for why you would need to attribute it does not hold up against the definition and is irrelevant. Of course, from a nascent AI we expect a large degree of error, and it is trained on a lot of questionable information.
So we can talk about appropriate or inappropriate uses of search engines or writing tools, but copyright and plagiarism are clearly off the table. The simple definitions preclude them from any discussion involving AI.
-
@Obsolesce said in Staying in Ethics and Legal with ChatGPT usage?:
In any case, it is always important to use your best judgment and consult with a teacher, professor, or other authority on academic integrity if you have any doubts about whether or not your use of text generated through an AI language model could be considered plagiarism.
So basically it is saying you need to verify whether your "authority" figure is being honest, or is just going to make up rules of their own and ignore the English language. The question isn't whether the use is plagiarism, but whether a corrupt person will misuse the term for personal gain (e.g., professors looking for easy answers).
As someone who has reported teaching staff for academic dishonesty and been told that academic honesty is only for students, not for university staff, I have little patience for dishonest educators.
-
@scottalanmiller said in Staying in Ethics and Legal with ChatGPT usage?:
@Obsolesce said in Staying in Ethics and Legal with ChatGPT usage?:
Oh sweet, ChatGPT built into Edge now!
I'm literally on the phone talking about all the customers who have gotten infected by using Edge. It's the new attack vector, and the most infections I've seen in a long time.
That sounds more like the kind of situation where those people would have gotten infected just the same regardless of which web browser they used. Were they on the latest version of the browser prior to infection?
-
@Obsolesce said in Staying in Ethics and Legal with ChatGPT usage?:
That sounds more like the kind of situation where those people would have gotten infected just the same regardless of which web browser they used. Were they on the latest version of the browser prior to infection?
As far as we can tell, yes. These are managed systems that are automatically updated; AV is up to date and active, and the firewall is on. But it just takes clicking on something.
We project that Edge puts people at additional risk because it is the default product on the most insecure platform, which is itself also a default choice. That makes it very likely that your target is someone "accepting everything because it is the default" rather than being thoughtful about their technology choices. It makes Edge an ideal public "flag" marking someone as a higher-than-average malware target.
-
@scottalanmiller said in Staying in Ethics and Legal with ChatGPT usage?:
@Obsolesce said in Staying in Ethics and Legal with ChatGPT usage?:
If your English professor wants you to write an essay, and you didn't write it, then I see a problem.
Problem, yes. But not plagiarism. And not quite cheating, either. It's a weird grey area. Because it's a universal tool.
So, no ownership? It is cheating, plain and simple. No different from looking over someone's shoulder to copy what they are writing during a test.
Have we really gone so far that "grey" justifies virtually any kind of behaviour? No culpability? No ownership? No responsibility for one's actions?
Wow, Scott. That's so sad, and the total antithesis of what we've taught our kids in our homeschool.
That's like having a bot lift weights for me then going home and telling my wife that I did the required exercise for the day.
No way. That's just bunk.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
It is cheating, plain and simple. No different from looking over someone's shoulder to copy what they are writing during a test.
How do you come to that conclusion? Where does the cheating come from? From whom are you taking the content? No one. It's a tool.
By this logic, how do you allow spell checkers, Grammarly, and other forms of "cheating" on the parts that don't matter?
I would say that, by definition, if you consider this cheating, you can only do so by making the busywork, not the output, the point of the project. That basically defines education as the avoidance of learning and value, rather than the increase of it.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
So, no ownership?
Correct, in the same way that a typewriter, calculator, spell checker, or grammar assistant does not own the work it helps you create. It still requires a human to operate those tools, and this one is no different.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
Have we really gone so far that "grey" justifies virtually any kind of behaviour? No culpability? No ownership? No responsibility for one's actions?
Grey? What's grey? Using tools to write isn't just the basis for human improvement; the purpose of education is to teach humans to excel at the portions that we don't have tools for.
What you say doesn't make sense. Culpability for doing the right thing? Responsibly using available tools so that you can focus on educational value? What kind of culpability is that?
You are using "cheating" as a foregone conclusion, but I can't even find a basis for the conversation. How has using a writing tool ever been cheating, and where else is it considered cheating? I don't see any component here that qualifies as grey. It's black and white: this is a tool, and there's no responsibility and no culpability because it's the RIGHT thing to do.
If you are writing papers and NOT using the available tools, aren't you just wasting time and admitting that the point is to waste time rather than to grow? Who is responsible for that? Who is culpable for that approach to "education"?
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
That's like having a bot lift weights for me then going home and telling my wife that I did the required exercise for the day.
No way. That's just bunk.
In your example, having a bot move weights does not accomplish the goal. In the case of ChatGPT, using the tool does accomplish the goal. They are polar opposites.
If your goal was to "move weights around", basically to do busywork to waste your time, then yes, using a bot to do it would be the most responsible approach. Why would a human waste time doing something of no value, or worse, negative value? Just to excuse the expenditure of time without needing to engage their brain.
That's what professors are doing. Everything you describe sounds like you are upset that we are exposing the education system. But you are blaming the students for exposing it rather than the professors and teachers who haven't been doing their jobs all along.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
Have we really gone so far that "grey" justifies virtually any kind of behaviour?
Try it in reverse. Try to justify to me being a PhD student or an employee who has access to ChatGPT and does not use it. As an educator and employer, I see avoiding the available tools as lazy and wrong. If you feel justified excusing the grey area of doing manual work where none is needed and none adds value (in my estimation), then explain the opposite to me... how do you excuse not using the available tools and just filling the student's or employee's time with pointless busywork?
-
Just to be fair, I understand that you can justify acting unethically or lazily or in a "grey area" by saying you have incompetent or unethical professors, or that your job is worthless and the goal is to waste time. But assume ethical, competent educators, meaning they are there to grow your potential and teach what is valuable instead of using busywork to hide that they are not doing their job; or ethical managers at work, who want the company to maximize profits rather than keep unnecessary workers busy with pointless tasks to make it appear that they need more headcount.
-
@scottalanmiller said in Staying in Ethics and Legal with ChatGPT usage?:
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
So, no ownership?
Correct, in the same way that a typewriter, calculator, spell checker, or grammar assistant does not own the work it helps you create. It still requires a human to operate those tools, and this one is no different.
In those examples the original content comes from the mind of the one hitting the keys.
Original, as in created, as in inspired by and written down, as in it came from the person themselves, not some machine.
Seriously Scott?
What's the saying? "Possession is 9/10ths of the law." ?
Having a machine spit out content then presenting it as something I created is a lie.
A shovel is a tool. A screwdriver is a tool. A computer is a tool.
Content is the creator's own.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
In those examples the original content comes from the mind of the one hitting the keys.
To some degree, but not entirely. And the same is true of ChatGPT. You still need a competent operator to make it produce useful output. The average person can't operate it to get a good PhD thesis, for example. So it still comes from the mind of the operator of the tool.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
Original, as in created, as in inspired by and written down, as in it came from the person themselves, not some machine.
Seriously Scott?
Dead serious. As serious as it gets. A human-operated tool, no matter how complex, is still just a tool. I'm unclear how you can think that operating a tool means stealing. Stealing... from whom?
You are redefining the very concept of stealing and tools to try to make an argument.
And even if you could do so, which I believe you can't even start to, to what end? What is your end goal? To hamper students and workers and enforce manual labor for future generations where none is warranted? To hold humanity back and make us do repetitive, pointless tasks instead of using computers, automation, search engines, and spell checkers?
Even if we assume the seemingly crazy notion that the tools own the output, not the operator, we STILL have to ask... isn't it still universally a good thing to use the automation that we can, so that humanity can grow and education can focus on teaching concepts rather than the mechanisms of expressing those concepts?
Under BOTH ethical positions I believe there is no grey area. The tools are black-and-white good to use, and that the tools exist is universally good. I see no ground, not an inch, to claim that either carries a negative, and I've not heard anyone give such an argument yet. On what grounds do you consider any of this not an absolutely stunning positive?
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
What's the saying? "Possession is 9/10ths of the law." ?
Right, and as the operator, you have 100% of the possession.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
Having a machine spit out content then presenting it as something I created is a lie.
A shovel is a tool. A screwdriver is a tool. A computer is a tool.
Content is the creator's own.
So you recognize other tools; what makes this one different to you? It is a tool and it requires an operator. The output of a tool is owned by the operator of that tool. Even when the shovel moves the dirt, you say that you moved the dirt. Even when a screwdriver turns a screw, you say that you screwed in the screw.
Apply your rules universally and the answer is clear. By your own example, clearly ChatGPT is just a tool and the output is the product of the operator.
-
Let me ask another way... can you state a general rule that allows the things you want to allow (spell checkers, word processors, printers, Grammarly, and other tools that remove tasks once considered critical to education or labor) and disallows whichever ones you think shouldn't be allowed? (I have no idea what you think shouldn't be allowed, so I can't give an example here.)
Basically, without picking on specific products, can you define what it is you think is bad? All of those products were considered to create some degree of the "content" in their time. To me, the content is the concept: not the words, not the paper, not the font, not the Google search. You seem to be assigning all the value to the mechanics of writing, and none to the ideas and subject matter.
-
@scottalanmiller said in Staying in Ethics and Legal with ChatGPT usage?:
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
In those examples the original content comes from the mind of the one hitting the keys.
To some degree, but not entirely. And the same is true of ChatGPT. You still need a competent operator to make it produce useful output. The average person can't operate it to get a good PhD thesis, for example. So it still comes from the mind of the operator of the tool.
I've come to realize that I will never again trust anything from anyone without proof that they know what they have presented.
That's going to be the differentiator in my mind.
There's always the person that comes from the "School of Good Enough" and it's those folks that will try and coast through using any "tool" they can without investing the time and effort needed to actually know something.
This conversation puts an entire segment of conferences, conventions, and so much more into question. I'll never be able to look at a person as being knowledgeable without having a conversation with them to determine whether they are a ChatGPT Clone or the real deal.
That's a really sad place to be in, Scott.
As I mentioned above, ownership means, "I did that." It came from me, not some machine. There is an inherent sense of accomplishment there.
There is no accomplishment having a machine do it for us. None.
And this is the point that doesn't seem to be registering here. Lots of deflections and explanations.
KISS
I do it = mine.
I write it out = mine.
I use a hammer, nails, a string, a tape measure, and a hose to build a house, and I did it.
To have someone go through life having it done for them by the ChatGPTs of this world is beyond sad; that person is missing one of the most important aspects of being human: "I created that."
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
I've come to realize that I will never again trust anything from anyone without proof that they know what they have presented.
That's going to be the differentiator in my mind.
Ah, but I think this just exposes something big. Why did you feel that they knew the material before? This is why PhD students defend their thesis... anyone can produce the paper; it's explaining and defending the concepts live that gets you the degree, not the paper.
If you were using the written paper as a proxy for testing someone's knowledge of something, then yes, I can see why you care about the mechanism rather than the output. But I'd say, again, all that is happening is that what was already true is being exposed.
As someone who made a living for a while in high school writing essays by request, I know how common it is not to have written your own paper. I don't know why people bought essays from me; whether they used them as source material, cited them, used them to summarize research, or turned them in as their own was not my concern. I was hired to write papers on topics. I didn't even know who got them. But I know actual intelligence went into writing papers that were used by people who knew nothing of the material.
When I went to university, the top-ranked university in the US at the time, it was expected that you had copied answers from previous years. They treated what other places called cheating as a baseline and tested only above it. If you didn't take the time to obtain and memorize previous years' tests, you would almost certainly fail. They didn't test on that material itself; they assumed it as a baseline of available knowledge.
So I see what you are saying, but what I'm saying is that the inability to trust that a paper with good words on it reflects the knowledge of the person turning it in was already there. ChatGPT isn't changing the game there in any way. Authors of works, even those who wrote every word themselves, rarely understand the material deeply. Writing an essay simply is not a good test of that.
So the issue, and the solution, should be pretty clear. Essay writing was never a great process for education (or work); we've just now exposed that beyond question. But for many of us, that happened long, long ago. Now you need to focus on discourse, which has always been the case.
-
@PhlipElder said in Staying in Ethics and Legal with ChatGPT usage?:
There's always the person that comes from the "School of Good Enough" and it's those folks that will try and coast through using any "tool" they can without investing the time and effort needed to actually know something.
Right. And that's how I feel people avoiding AI in writing are approaching it. They think it is "good enough" to grade or evaluate the unimportant, automatable portions of writing that we don't need humans for... because that part is easier to test. Spelling, sentence structure, dates, names, citations... all of that requires effort, but not thought. No creativity, no value. Caring about any of it is settling for "good enough."
If I'm evaluating someone's ability to learn a subject, I want to know whether they can discuss it live: how quickly they react, how well they deal with the unknown (counter-ideas thrown at them in real time), and so on. Writing down facts, or even producing opinions with lots of free time, is easy. Defending a position in real time requires you to actually know things, not to have looked them up in the past. Very different things.
This is why, when interviewing people, we hold conversations. Potentially anyone can answer questions, even people with no idea what they are answering. But carrying on a meandering, deep conversation, where ideas are bantered about and cross-domain knowledge must be applied in real time, tests something very different.