
What have I done?



41 minutes ago, MtB said:

 

I'd imagine if you found and deleted the CWF biscuits (the term 'cookies' is too American for my taste), you'd find yourself logged out anyway.

 

 

 

I'd imagine if you found and deleted the CWF biscuits (the term 'cookies' is too American for my taste), you'd find yourself logged out anyway.

 

 

Looks like the AI cookies have given you a stutter


I don't like those 'captcha' things at all. 

 

It seems wrong for a robot to be asking if you are really a human...

 

Some sort of bizarre role reversal going on.

 

Surely the human should be asking the robot to verify itself...

 

 

 

 


1 minute ago, magnetman said:

I don't like those 'captcha' things at all. 

 

Nor me. They are clearly designed to be ambiguous. Like, is that bicycle wheel ever so slightly over the edge of the square? And if so why can't I see it on the edge of the next square? And what about that moped in one of the squares? Does that count as a bicycle? Stuff like that gets you making value judgements with no idea of the standards by which your answers will be judged. I just keep requesting new pictures until a sensible one gets offered.

 

I quite liked the spoof one which was a photo of Boris Johnson divided up into the nine squares. The caption said "Select every box containing a liar".

 

 


1 minute ago, buccaneer66 said:

I can never read those damn text ones

 

Oh dear! 

 

I quite like the ones that say stuff like "Select the answer to two plus three". Or "What colour is a frog?" These require any AI taking the test to genuinely read and understand...

 

 

 

 


10 hours ago, magnetman said:

I don't like those 'captcha' things at all. 

 

It seems wrong for a robot to be asking if you are really a human...

 

Some sort of bizarre role reversal going on.

 

Surely the human should be asking the robot to verify itself...

Appropriate XKCD cartoon.

machine_learning_captcha.png

https://xkcd.com/license.html


Surely we can beat these AI beggars at their own game? I asked ChatGPT, “How would a moderator detect generated content on an internet forum?”

The response was:-

 

Detecting generated content on an internet forum, such as content generated by automated bots or AI models like GPT-3, can be challenging but important for maintaining the integrity and quality of the forum. Here are some techniques and strategies a moderator can use to identify generated content:

  1. Review User Profiles:

    • Check the user profile for suspicious activity, such as a high volume of posts in a short amount of time or a lack of personal information.

    • Look for signs of automation, like generic usernames or profile pictures.

  2. Check for Repetitive Patterns:

    • Analyze the content for repetitive or formulaic patterns. Generated content often follows a template or lacks the nuances of human language.

  3. Unusual Behavior:

    • Look for unusual behavior, such as posting at odd hours consistently or posting the same content across multiple threads or forums.

  4. Inconsistent Responses:

    • Ask questions or engage the user in a way that requires a deeper understanding of the topic. Automated responses may struggle to provide coherent or contextually appropriate answers.

  5. Language and Grammar:

    • Evaluate the quality of language and grammar in posts. AI-generated content might have occasional grammatical errors, unusual phrasing, or overly complex language.

  6. Content Quality:

    • Assess the quality of the content. If it's excessively generic, vague, or doesn't contribute meaningfully to the discussion, it could be generated content.

  7. Reverse Image Search:

    • If the user is posting images, perform a reverse image search to check if the images have been reused from the internet.

  8. Check for Plagiarism:

    • Use plagiarism detection tools to see if the content has been copied from other sources, including AI-generated text.

  9. Monitor Posting Frequency:

    • Keep an eye on how frequently a user is posting. Bots and AI models can post at a much higher rate than humans.

  10. Use AI Detection Tools:

    • There are AI-based tools and services available that can help moderators identify generated content. These tools can analyze text and patterns to flag potentially automated posts.

  11. Community Reporting:

    • Encourage forum members to report suspicious content. Users can often spot generated content based on their experience.

  12. Compare Against Known Models:

    • Some moderators may be familiar with common AI models used for content generation, like GPT-3.5. They can compare content against the output of these models to identify similarities.

  13. Manual Review:

    • Ultimately, a moderator may need to manually review content that is suspected to be generated. This can be time-consuming but is often necessary for accurate detection.

Remember that while these techniques can help detect generated content, no method is foolproof. A combination of human judgment and automated tools is often the most effective approach to maintain the quality and authenticity of an internet forum. Additionally, it's essential to have clear forum rules and guidelines in place to discourage the use of automated content generation and ensure a healthy online community.
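
As a toy illustration of points 2 and 9 on that list, here is a minimal Python sketch; the sample posts, the thresholds, and the flagging rule are all invented for the example:

from collections import defaultdict
from difflib import SequenceMatcher

# Invented sample data: (username, unix timestamp, post text).
posts = [
    ("suspect_account", 1000, "Great article, very informative, thanks for sharing!"),
    ("suspect_account", 1030, "Great article, really informative, thanks for sharing!"),
    ("suspect_account", 1055, "Great post, very informative, thank you for sharing!"),
    ("human_user", 1000, "Is that bicycle wheel over the edge of the square or not?"),
    ("human_user", 5000, "I gave up and asked for a new picture."),
]

MAX_POSTS_PER_MINUTE = 1.0  # arbitrary threshold, purely for illustration
SIMILARITY_CUTOFF = 0.8     # consecutive posts this similar count as repetitive

by_user = defaultdict(list)
for user, ts, text in posts:
    by_user[user].append((ts, text))

for user, items in by_user.items():
    items.sort()
    times = [ts for ts, _ in items]
    texts = [text for _, text in items]

    # Point 9: posting frequency across the account's active window.
    span_minutes = max((times[-1] - times[0]) / 60, 1)
    too_fast = len(items) / span_minutes > MAX_POSTS_PER_MINUTE

    # Point 2: repetitive, formulaic wording between consecutive posts.
    repetitive = any(
        SequenceMatcher(None, a, b).ratio() > SIMILARITY_CUTOFF
        for a, b in zip(texts, texts[1:])
    )

    if too_fast or repetitive:
        print(f"{user}: worth a manual look (fast={too_fast}, repetitive={repetitive})")

Real moderation tooling would need better text similarity and per-thread context, but the shape of the check is the same.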

 

 

Am I now part of the problem???

 


16 minutes ago, Bacchus said:

Surely we can beat these AI beggars at their own game? I asked ChatGPT, “How would a moderator detect generated content on an internet forum?”

 


I think Jen is already on 1, 4, 6, 11 and 13...

1 hour ago, Jen-in-Wellies said:

Appropriate XKCD cartoon.

machine_learning_captcha.png

https://xkcd.com/license.html

It's not quite as funny as it seems: captchas are an excellent way of training a machine to think like a human... In @MtB's example, the computer might well know that a moped could be a bicycle, but it learns from captcha tests that 73% of humans think it isn't. So it can be more like a human next time...
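
For what it's worth, the training step described above can be as simple as a majority vote over human answers. A minimal Python sketch, with invented numbers matching the 73% figure above:

from collections import Counter

# Invented data: 100 captcha answers on whether the moped square counts as a "bicycle".
answers = ["not a bicycle"] * 73 + ["bicycle"] * 27

votes = Counter(answers)
majority_label, count = votes.most_common(1)[0]

print(f"{count / len(answers):.0%} of humans say the moped is {majority_label!r}")
# The model adopts the majority view, so next time it answers like a human would.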

 

Next...

[attached image]


On 06/10/2023 at 13:19, Tracy D'arth said:

You are not alone. I did make a request for plain English because I do not understand, but they simply requoted with an additional error.

 

Sorry for the typo. Perhaps updating it via my phone was not a great idea. I have since amended the FR&G to correct this.

 

The FR&G are intended to be short and concise. However, if there is anything that is still unclear or ambiguous, please feel free to let us know and we will be more than happy to clarify and/or review the FR&G as necessary. 

 

As an aside, whenever we update the FR&G (which is not often) the forum software will ask members to review & accept. This includes any amendments to fix any typos. 

On 06/10/2023 at 12:24, 1st ade said:

Give Jen a break (please)

 

Seconded


On 06/10/2023 at 18:28, rusty69 said:

I'm not sure how I feel about being described as an artificial inseminatted boater. 

Whether you are inseminatttted or not is your problem, but being called an artificial boater is highly offensive and should be reported.


On 06/10/2023 at 19:42, MtB said:

I'd imagine if you found and deleted the CWF biscuits (the term 'cookies' is too American for my taste), you'd find yourself logged out anyway

I too had always thought that "cookie" was a purely American term. However, in a story in "William", one of Richmal Crompton's "Just William" books, written in 1929, what are clearly gingerbread men are called "cookie boys". Like all the stories, it is set in an English village in the Home Counties.

[attached photo of the page]


On 06/10/2023 at 16:31, Puffling said:

I may be being a little slow here, but is asking a new or existing member to click a button declaring that they are not a bot going to prevent the infiltration by said AI?

Annoying as it may be, there is a point. It doesn't stop infiltration, but it makes it absolutely obvious that the infiltrator knew what they were doing and broke the Ts and Cs (by giving an untruth on application), which is itself reason to ban them. And this applies even if it was a human registering with the intention of handing the account to a bot once they were in.


5 hours ago, 1st ade said:

Annoying as it may be, there is a point. It doesn't stop infiltration, but it makes it absolutely obvious that the infiltrator knew what they were doing and broke the Ts and Cs (by giving an untruth on application), which is itself reason to ban them. And this applies even if it was a human registering with the intention of handing the account to a bot once they were in.

 

Having clicked "OK" in annoyance just to get rid of the screen, I find my curiosity has now been piqued.

 

Is there any way to revisit and actually read what I agreed to?!

 

 


33 minutes ago, MtB said:

 

Having clicked "OK" in annoyance just to get rid of the screen, I find my curiosity has now been piqued.

 

Is there any way to revisit and actually read what I agreed to?!

 

 

 

Top of the page, far right, under "More": the Rules and Guidelines.

