• 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: September 27th, 2023

  • (because it was trained on real people who write with those quirks)

    Yes and no. Generally speaking, ML models pull towards the average and away from the extremes, while most people have weird quirks when they write. (For example, my overuse of (), too many , instead of . and probably a few other things I’m unaware of.)

    To use a completely different example: if you average the facial features of humans in a large group (size, position, orientation, etc. of everything), you get a conventionally very attractive person. But very, very few people are actually close to that ideal. This is because the average person, meaning a random person, has a few features that stray far from this ideal. Just by the sheer number of features, there’s a high chance some will end up out of bounds.
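    A quick way to see this effect with numbers (a toy sketch of mine with made-up feature vectors, not real face data): the average of many random "people" is mild in every feature, yet most individuals have at least one extreme one.

    ```python
    import numpy as np

    # Toy sketch: 10,000 "people", each described by 30 random features.
    rng = np.random.default_rng(0)
    people = rng.normal(loc=0.0, scale=1.0, size=(10_000, 30))

    # The averaged "person" sits close to 0 in every feature...
    average_person = people.mean(axis=0)
    print("largest deviation in the average:", np.abs(average_person).max())  # tiny

    # ...but most individuals have at least one feature beyond 2 standard deviations.
    has_extreme_feature = (np.abs(people) > 2).any(axis=1)
    print("share of people with an 'out of bounds' feature:", has_extreme_feature.mean())  # ~0.75
    ```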

    An ML model will generally be punished during training for producing anything that contains such extremes, so the very human trait of being eccentric in any regard is trained away. If you’ve ever seen people generate anime waifus with modern generative models, you know exactly what I mean. Some methods can be and are being deployed to try to keep/bring back those eccentricities, at least when asked for.

    On top of that, modern LLM chatbots have a reinforcement learning stage, where they learn to write in a way that readers will enjoy reading. That is no longer copying but rather “inventing” in a trial-and-error style. Think of the YouTube videos you’ve seen of “AI learns to play x game”, where no footage of someone actually playing the game was used and the model still learned. I’m assuming that’s where the overuse of em-dashes and quippy one-liners comes from: they were probably liked by either the human testers or the automated judges trained on the human feedback used in that process.
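    To make the trial-and-error idea concrete, here’s a deliberately tiny sketch (my own toy example, nothing like the scale or math of real RLHF): the “model” has a single stylistic knob, a simulated judge slightly prefers one style, and the knob drifts toward whatever gets rewarded.

    ```python
    import random

    em_dash_prob = 0.5   # the model's only stylistic knob: how often it uses an em-dash
    learning_rate = 0.05

    def judge(uses_em_dash: bool) -> float:
        """Simulated preference score; this judge likes em-dashes a bit more."""
        return 1.0 if uses_em_dash else 0.6

    for _ in range(2000):
        uses_em_dash = random.random() < em_dash_prob   # the model tries something
        reward = judge(uses_em_dash)                    # the judge rates the attempt
        target = 1.0 if uses_em_dash else 0.0
        # Nudge the knob toward whatever was tried, proportional to the reward.
        em_dash_prob += learning_rate * reward * (target - em_dash_prob)

    print(f"em-dash probability after training: {em_dash_prob:.2f}")  # drifts toward 1.0
    ```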




  • Mirodir@discuss.tchncs.de to Funny@sh.itjust.works · Worth It
    edited · 3 months ago

    It’s not even only colloquial; it’s the scientific term for it.

    Edit: Even things that have nothing to do with machine learning or deep learning are AI, e.g. stupid rule-based approaches (a.k.a. tons of if-else). Deep Learning is a subset of Machine Learning, which is a subset of AI.
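    For illustration, something as dumb as this (a made-up example of mine) already counts as “AI” in the textbook sense, even though there’s zero learning going on:

    ```python
    # A rule-based "game AI": nothing but hard-coded if-else rules.
    def npc_action(health: int, enemy_visible: bool) -> str:
        if health < 20:
            return "retreat"
        elif enemy_visible:
            return "attack"
        else:
            return "patrol"

    print(npc_action(health=80, enemy_visible=True))   # attack
    print(npc_action(health=10, enemy_visible=True))   # retreat
    ```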






  • For sure, that’s why my main accusation is that they’re directing traffic to their bad article (it could even be an attempt at getting search engines to associate the article with “android games 2024”) and not the AI stuff. I just started with the AI accusation because it was funny to me that OP and you were already talking about AI (in games).

    AI or not, the post is poorly written and has little to no informative content.

    I do agree with you though, some people throw around AI accusations way too quickly, especially when they spot mistakes. LLMs are very good at NOT making grammatical or syntactical mistakes in English. If anything, those mistakes are often a sign of authenticity.


  • What games use AI to enrich the user experience? Highly doubting that one.

    Even more so, I highly doubt OP was written with anything but AI. Even if we give them the benefit of the doubt that they wrote it by hand, it’s very suspicious that their article on mobile games in 2024 has a URL stating it’s about 2021 and mostly mentions games from back then. Using the Wayback Machine (I would never give them a click) reveals that it’s (mostly) the same article over all those years, with the year in the title updated and some changes to fit the website’s current layout.

    While I cannot say with near certainty that OP is written by AI, I do feel confident saying that this post exists solely to direct traffic to that shitty article.



  • Mirodir@discuss.tchncs.de to Programmer Humor@programming.dev · Sus
    edited · 10 months ago

    Sure. You have to solve it from the inside out:

    • not()…See comment below for this one, I was tricked. I originally wrote: it’s a base function that negates what’s inside (turning True into False and vice versa); giving it no parameter returns “True” (because no parameter counts as False)
    • str(x) turns x into a string, in this case it turns the boolean True into the text string ‘True’
    • min(x) returns the minimal element of an iterable, in this case the character ‘T’, because capital letters come before lowercase letters; otherwise it would return ‘e’. (Python compares characters by their Unicode code points; capitals have lower values than lowercase letters, and within each case they’re ordered alphabetically ascending.)
    • ord(x) returns the unicode number of x, in this case turning ‘T’ into the integer 84
    • range(x) creates an iterable from 0 to x (non-inclusive), in this case you can think of it as the list [0, 1, 2, …82, 83] (it’s technically an object of type range but details…)
    • sum(x) sums up all elements of an iterable; summing all numbers between 0 and 84 (non-inclusive) gives 3486
    • chr(x) is the inverse of ord(x) and returns the character at position x, which, you guessed it, is ‘ඞ’ at position 3486.

    The huge coincidental part is that ඞ lies at a position that can be reached by a cumulative sum of integers between 0 and a given integer. From there on it’s only a question of finding a way to feed that integer into chr(sum(range(x)))
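    If I reconstruct the expression from the steps above as chr(sum(range(ord(min(str(not())))))), you can check each stage yourself in a Python REPL:

    ```python
    step1 = not ()          # `not` is a keyword, not a function; the empty tuple () is falsy, so this is True
    step2 = str(step1)      # 'True'
    step3 = min(step2)      # 'T' (capital letters come before lowercase in Unicode)
    step4 = ord(step3)      # 84
    step5 = range(step4)    # 0, 1, 2, ... 83
    step6 = sum(step5)      # 3486
    print(chr(step6))       # 'ඞ' (U+0D9E)
    ```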


  • I think the humor is meant to be in the juxtaposition between “reference” in media contexts (e.g. “I am your father”) and “reference” in programming contexts and applying the latter context to the former one.

    What does “I’m your father” mean if the movie is jaws?

    I think the absurdity of that question is part of said humor. That being said, I didn’t find it funny either.





  • I’m not really sure how to describe it other than: when I read a function to determine what it does and then go to the next part of the code, I’ve already forgotten how the function transforms the data

    This sounds to me like you could benefit from mentally applying the information-hiding principle to your functions. In other words: outside of the function, the only things that matter are “what goes in?” and “what comes out?”. The implementation details should not be important once you’re working on code outside of that function.

    To achieve this, maybe you could write a short comment right at the start of every function, one to two sentences detailing only the inputs/outputs of that function, e.g. “Accepts an image and a color and returns a mask that shows where that color is present.” If you later forget what the function does, all you need to do is read that one sentence to remember. If it’s too convoluted to write in one or two sentences, your function is likely trying to achieve too much at once and could (arguably “should”) be split up.
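    As a rough sketch of what I mean (hypothetical names, using a docstring as that one-to-two-sentence summary):

    ```python
    import numpy as np

    def color_mask(image: np.ndarray, color: tuple[int, int, int]) -> np.ndarray:
        """Accepts an RGB image and a color and returns a boolean mask showing where that color is present."""
        return np.all(image == np.array(color), axis=-1)

    # Outside the function, only "image + color in, mask out" matters:
    picture = np.zeros((4, 4, 3), dtype=np.uint8)
    picture[1, 2] = (255, 0, 0)
    mask = color_mask(picture, (255, 0, 0))
    print(mask[1, 2], mask[0, 0])   # True False
    ```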

    Also on a different note: don’t sell your ability to “kludge something together” short. If you ever plan to do this professionally or academically, you will sadly, inevitably run into situations where you have no choice but to deliver a quick and dirty solution over a clean and well-thought-out one.

    Edit: typos