• 0 Posts
  • 37 Comments
Joined 1 year ago
Cake day: July 18th, 2023

  • I wouldn’t say worst, but maybe the greatest gap between expectation and reality: “My Time at Portia”.

    The cutscenes and voice acting were janky. The UI felt like it was designed for an MMO and seemed out of place in a single-player game. The gameplay loop felt tedious and seemed to disrespect the player’s time.

    Maybe I needed to give it more time, but for a game that I thought had generally good-to-great reviews, it just wasn’t clicking for me.



  • The issue isn’t that they lack diverse training data; it’s that not all groups get equal representation in it. A model trained that way will be biased toward producing a white person when you ask generically for a “person”. To keep it from always spitting out a white person on a generic prompt, they inject additional words into the prompt, like “racially ambiguous”, which occasionally encourages (or forces) more diversity in the results. The problem is that these models are too complex for that kind of patch to work seamlessly.
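    A minimal sketch of that prompt-injection idea (the term list, injection rate, and function name here are all hypothetical illustrations, not any vendor’s actual implementation):

```python
import random

# Hypothetical terms a provider might silently append to generic prompts.
DIVERSITY_TERMS = ["racially ambiguous", "diverse", "of varied ethnicity"]

def augment_prompt(prompt: str, rate: float = 0.5, rng=random) -> str:
    """Occasionally inject a diversity term into a generic 'person' prompt."""
    if "person" in prompt.lower() and rng.random() < rate:
        return f"{prompt}, {rng.choice(DIVERSITY_TERMS)}"
    return prompt

# The injection is invisible to the user, which is why results can look
# inconsistent: the model sees a different prompt than the one typed in.
print(augment_prompt("a photo of a person", rate=1.0))
```

    Because the rewrite happens before the model ever sees the text, it applies even where it makes no sense, which is exactly the “too complex to patch seamlessly” problem.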





  • The difference from the analogy you’re making is that the significance/value of my marriage doesn’t depend on other marriages. My Xbox continuing to get support, however, does depend on M$oft valuing their hardware. Opening up their exclusives signals that they don’t care about hardware sales (and therefore the hardware itself) as much as game/software sales.

    So my concern, if they go this route, is that they won’t maintain the hardware I own. That’s probably not 100% accurate, and I “technically” knew the risk when I bought it, but it’s still not a great feeling.

    Overall though I’m anti-exclusives, so long-term I would hope it’s a good thing.


  • Yes, and most likely more of a paradigm shift. Deep learning models are largely static statistical models. The main issue isn’t the statistical side but the static nature: for AGI this is a significant hurdle, because as the world evolves, or these models simply run into new circumstances, they fail.
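    A toy sketch of that static-parameters problem (all numbers purely illustrative): a model fit once on old data keeps its parameters frozen, so when the underlying relationship changes, it quietly fails.

```python
def fit_line(xs, ys):
    """Ordinary least squares for a 1-D linear model y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

def mse(model, xs, ys):
    """Mean squared error of a fitted (a, b) line on data."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# "Training world": y = 2x + 1
train_x = [0, 1, 2, 3, 4]
train_y = [2 * x + 1 for x in train_x]
model = fit_line(train_x, train_y)  # parameters are now frozen

# The world evolves: the relationship becomes y = -x + 5,
# but the static model never updates.
new_y = [-x + 5 for x in train_x]
print(mse(model, train_x, train_y))  # ~0: fits the old world perfectly
print(mse(model, train_x, new_y))    # large: fails in the new one
```

    Retraining fixes this toy case, but for deployed systems that can’t continuously retrain, the frozen-model failure mode is the same.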

    It’s largely why autonomous vehicles have sorta hit a standstill. It’s the last 1% of cases (what if an intersection is out, what if the road is poorly maintained, etc.) that is so hard for these models, because those cases require “thought” and not just input/output mapping.

    LLMs have shown that training on large quantities of data seems to approach some sort of generalized knowledge, though researchers don’t all agree on that (https://arxiv.org/abs/2206.07682). So if we can’t get to more emergent abilities, AGI is unlikely to be on the way. But as you said, combining and interweaving these systems may get us something close.