Chris Lowe Posted October 6, 2023
41 minutes ago, MtB said: I'd imagine if you found and deleted the CWF biscuits (the term 'cookies' is too American for my taste), you'd find yourself logged out anyway. I'd imagine if you found and deleted the CWF biscuits (the term 'cookies' is too American for my taste), you'd find yourself logged out anyway.
Looks like the AI cookies have given you a stutter
MtB Posted October 6, 2023
5 minutes ago, buccaneer66 said: Looks like the AI cookies have given you a stutter
No I dunnit on purpose to prove I am a genuinely stoopids and not an artificially intelligents.
magnetman Posted October 6, 2023
I don't like those 'captcha' things at all. It seems wrong for a robot to be asking if you are really a human... Some sort of bizarre role reversal going on. Surely the human should be asking the robot to verify itself...
MtB Posted October 6, 2023
1 minute ago, magnetman said: I don't like those 'captcha' things at all.
Nor me. They are clearly designed to be ambiguous. Like, is that bicycle wheel ever so slightly over the edge of the square? And if so why can't I see it on the edge of the next square? And what about that moped in one of the squares? Does that count as a bicycle? Stuff like that gets you making value judgements with no idea of the standards by which your answers will be judged. I just keep requesting new pictures until a sensible one gets offered.
I quite liked the spoof one which was a photo of Boris Johnson divided up into the nine squares. The caption said "Select every box containing a liar".
Chris Lowe Posted October 6, 2023
I can never read those damn text ones
MtB Posted October 6, 2023
1 minute ago, buccaneer66 said: I can never read those damn text ones
Oh dear! I quite like the ones that say stuff like "Select the answer to two plus three". Or "What colour is a frog?" These require any AI taking the test to genuinely read and understand....
Alan de Enfield Posted October 6, 2023
17 minutes ago, MtB said: "What colour is a frog?"
Dunno - what colour is a frog ?
Tracy D'arth (Author) Posted October 6, 2023
What is a frog?
Alan de Enfield Posted October 6, 2023
7 minutes ago, Tracy D'arth said: What is a frog?
Maybe @Stilllearning can help with that.
David Mack Posted October 7, 2023
2 hours ago, Tracy D'arth said: What is a frog?
MtB Posted October 7, 2023
Alan obviously can't work out what answer they want to the question "What colour is a frog?", so will fail any Recaptcha test with that question!
Jen-in-Wellies Posted October 7, 2023
10 hours ago, magnetman said: I don't like those 'captcha' things at all. It seems wrong for a robot to be asking if you are really a human... Some sort of bizarre role reversal going on. Surely the human should be asking the robot to verify itself...
Appropriate XKCD cartoon. https://xkcd.com/license.html
Bacchus Posted October 7, 2023
Surely we can beat these AI beggars at their own game? I asked ChatGPT “How would a moderator detect generated content on an internet forum”. The response was:
Detecting generated content on an internet forum, such as content generated by automated bots or AI models like GPT-3, can be challenging but important for maintaining the integrity and quality of the forum. Here are some techniques and strategies a moderator can use to identify generated content:
1. Review User Profiles: Check the user profile for suspicious activity, such as a high volume of posts in a short amount of time or a lack of personal information. Look for signs of automation, like generic usernames or profile pictures.
2. Check for Repetitive Patterns: Analyze the content for repetitive or formulaic patterns. Generated content often follows a template or lacks the nuances of human language.
3. Unusual Behavior: Look for unusual behavior, such as posting at odd hours consistently or posting the same content across multiple threads or forums.
4. Inconsistent Responses: Ask questions or engage the user in a way that requires a deeper understanding of the topic. Automated responses may struggle to provide coherent or contextually appropriate answers.
5. Language and Grammar: Evaluate the quality of language and grammar in posts. AI-generated content might have occasional grammatical errors, unusual phrasing, or overly complex language.
6. Content Quality: Assess the quality of the content. If it's excessively generic, vague, or doesn't contribute meaningfully to the discussion, it could be generated content.
7. Reverse Image Search: If the user is posting images, perform a reverse image search to check if the images have been reused from the internet.
8. Check for Plagiarism: Use plagiarism detection tools to see if the content has been copied from other sources, including AI-generated text.
9. Monitor Posting Frequency: Keep an eye on how frequently a user is posting. Bots and AI models can post at a much higher rate than humans.
10. Use AI Detection Tools: There are AI-based tools and services available that can help moderators identify generated content. These tools can analyze text and patterns to flag potentially automated posts.
11. Community Reporting: Encourage forum members to report suspicious content. Users can often spot generated content based on their experience.
12. Compare Against Known Models: Some moderators may be familiar with common AI models used for content generation, like GPT-3.5. They can compare content against the output of these models to identify similarities.
13. Manual Review: Ultimately, a moderator may need to manually review content that is suspected to be generated. This can be time-consuming but is often necessary for accurate detection.
Remember that while these techniques can help detect generated content, no method is foolproof. A combination of human judgment and automated tools is often the most effective approach to maintain the quality and authenticity of an internet forum. Additionally, it's essential to have clear forum rules and guidelines in place to discourage the use of automated content generation and ensure a healthy online community.
Am I now part of the problem???
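For the curious, the "Monitor Posting Frequency" idea above is the one most easily automated. A minimal sketch in Python might look like this; the function name and the ten-posts-per-hour threshold are made up for illustration, not anything a real forum package uses:

```python
from datetime import datetime, timedelta

def flag_high_frequency(post_times, max_posts=10, window=timedelta(hours=1)):
    """Return True if any sliding time window of length `window`
    contains more than `max_posts` posts. Thresholds are illustrative."""
    times = sorted(post_times)
    start = 0
    for end in range(len(times)):
        # Shrink the window from the left until it spans at most `window`.
        while times[end] - times[start] > window:
            start += 1
        if end - start + 1 > max_posts:
            return True
    return False
```

A human posting a dozen replies spread over a day would pass; a bot firing a dozen posts in ten minutes would trip the flag. Real moderation tools would combine this with the other signals in the list rather than rely on rate alone.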
1st ade Posted October 7, 2023
16 minutes ago, Bacchus said: Surely we can beat these AI beggars at their own game? I asked ChatGPT “How would a moderator detect generated content on an internet forum” [...] Am I now part of the problem???
I think Jen is already on 1, 4, 6, 11 and 13...
1 hour ago, Jen-in-Wellies said: Appropriate XKCD cartoon. https://xkcd.com/license.html
It's not quite as funny as it seems - captchas are an excellent way of training a machine how to think like a human... In @MtB's example, the computer might well know that a moped could be a bicycle, but it learns from captcha tests that 73% of humans think it isn't. So it can be more like a human next time... Next...
RichM Posted October 7, 2023
On 06/10/2023 at 13:19, Tracy D'arth said: You are not alone. I did make a request for plain English because I do not understand but They simply requoted with an additional error.
Sorry for the typo. Perhaps updating it via my phone was not a great idea. I have since amended the FR&G to correct this. The FR&G are intended to be short and concise. However, if there is anything that is still unclear or ambiguous, please feel free to let us know and we will be more than happy to clarify and/or review the FR&G as necessary. As an aside, whenever we update the FR&G (which is not often) the forum software will ask members to review & accept. This includes any amendments to fix any typos.
On 06/10/2023 at 12:24, 1st ade said: Give Jen a break (please)
Seconded
Tony Brooks Posted October 7, 2023
I can't see what all the fuss is about. It seems standard practice when terms and conditions are updated by internet based companies.
system 4-50 Posted October 7, 2023
On 06/10/2023 at 18:28, rusty69 said: I'm not sure how I feel about being described as an artificial inseminatted boater.
Whether you are inseminatttted or not is your problem, but being called an artificial boater is highly offensive and should be reported.
Ronaldo47 Posted October 7, 2023 Report Share Posted October 7, 2023 (edited) On 06/10/2023 at 19:42, MtB said: I'd imagine if you found and deleted the CWF biscuits (the term 'cookies' is too American for my taste), you'd find yourself logged out anyway I too had always thought that "cookie" was a purely American term. However, in a story in "William", one of Richmal Cromptons "Just William" books, and written in 1929, what are clearly gingerbread men, are called "cookie boys". Like all the stories, it is set in an English village in the Home Counties. Edited October 7, 2023 by Ronaldo47 typos Link to comment Share on other sites More sharing options...
Alan de Enfield Posted October 7, 2023
13 minutes ago, Ronaldo47 said: written in 1929,
It's not long after the Yanks landed here and were billeted in Southern England (along with their Gum and Cookies)
1st ade Posted October 8, 2023
On 06/10/2023 at 16:31, Puffling said: I may be being a little slow here, but is asking a new or existing member to click a button declaring that they are not a bot going to prevent the infiltration by said AI?
Annoying as it may be, there is a point. It doesn't stop infiltration, but it makes it absolutely obvious that the infiltrator knew what they were doing and broke the T's and C's (by giving an untruth on application) which is, itself, reason to ban them. And this applies even if it was a human registering, with the intention of handing the account to a Bot once they were in.
MtB Posted October 8, 2023
5 hours ago, 1st ade said: Annoying as it may be, there is a point. It doesn't stop infiltration, but it makes it absolutely obvious that the infiltrator knew what they were doing and broke the T's and C's [...]
Having clicked on "OK" in annoyance to just get rid of the screen, my curiosity has now been piqued. Is there any way to revisit and actually read what I agreed to?!
Tony Brooks Posted October 8, 2023
33 minutes ago, MtB said: Having clicked on "OK" in annoyance to just get rid of the screen, my curiosity has now been piqued. Is there any way to revisit and actually read what I agreed to?!
Top of the page, far right, under "More": the Rules and Guidelines.