
  • 54 Posts
  • 2.32K Comments
Joined 3 years ago
Cake day: June 16th, 2023


  • kromem to Technology · OpenAI Will Shut Down Sora Video Platform
    13 points · 16 days ago

    It’s not, and it’s probably the opposite.

    When Sora launched, it was way ahead. Then Seedance 2’s release was notably better than any of the other video-gen models, Sora included.

    The market is getting commoditized because there’s no moat, and OpenAI hasn’t led on pretty much any release for a while now other than Sora, where they’re now probably falling behind too.

    From a tech standpoint this is the opposite of a bubble bursting, even if OpenAI as a company starts to pop.

    TL;DR: This is likely happening because the tech accelerated across the industry in ways OpenAI can’t catch back up to, not because the tech itself is lagging.


  • kromem to Technology · OpenAI Will Shut Down Sora Video Platform
    1 point · 16 days ago

    I suspect it’s that they got eclipsed by ByteDance with Seedance 2.0.

    The video from that model is really good and makes Sora look pretty meh, and it may be that the current work on a next-gen Sora wasn’t going to be competitive enough.

    The worst thing a lab can do right now is look like they are falling behind (e.g., Meta), especially with OpenAI planning for an IPO.

    So on top of the lackluster “social media” offering tied to Sora, they decided to shutter the entire video product line and pivot to enterprise (where they’ve already lost significant market share to Anthropic).

    They’re in a pretty meh place at the moment overall tbh. I’m skeptical they’ll recover.

    (But I wouldn’t mistake their fumbling for an industry wide shift on AI in general or even video AI.)


  • That’s what he’s saying: that it doesn’t change the geometry or textures (those are still completely controlled by the devs), and that the parts it does change are also tunable by the devs.

    He’s responding to the backlash about how it changes models/textures (which it doesn’t) by saying those are still fully in the hands of the devs, and that the parts people are seeing in the demos can be fine-tuned by the dev teams to match their vision for what they want it to do or not do (like changing lighting on material surfaces and hair, but not character faces, for example).



  • Yes, hair in typical video game lighting and hair in actual chiaroscuro, the way light really works, are going to look different.

    Here’s a painting from over a hundred years ago. The subject doesn’t have brown roots, but is in shadow. And a comparison image of the exact same hair in different lighting conditions.

    Performing complex lighting on individual hair strands is really expensive, so in the base image you get a kind of uniform diffuse lighting throughout the hair. With DLSS 5 on, the distribution of light through the hair becomes variable, leading to darker unlit strands underneath lit surface strands (the toy sketch at the end of this comment shows the cost gap that forces the shortcut).

    The only thing DLSS 5 is changing, in the literal technical sense, is the lighting. It’s just that lighting can have dramatic effects on how the eye perceives what’s lit.

    And yes, the hair looks very different, but that’s how hair actually looks in mixed light and shadow (though a fair complaint with DLSS 5 is that it seems to push the contrast unnaturally high).
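
    Since the argument above leans on the cost gap between the flat diffuse approximation and true per-strand lighting, here’s a toy Python sketch of that gap. Everything in it (function names, numbers) is an illustrative assumption, not engine or DLSS code.

    def flat_diffuse(strand_count: int, ambient: float) -> list[float]:
        # Cheap base-image approach: every strand gets the same brightness. O(strands)
        return [ambient] * strand_count

    def per_strand(light_intensities: list[float], occlusion: list[float]) -> list[float]:
        # Expensive approach: each strand sums every light, attenuated by how
        # shadowed it is by the strands above it, so deep strands stay dark
        # under lit surface strands. O(strands * lights)
        return [sum(light_intensities) * (1.0 - occ) for occ in occlusion]

    print(flat_diffuse(4, 0.6))                          # [0.6, 0.6, 0.6, 0.6]
    print(per_strand([0.5, 0.3], [0.0, 0.2, 0.9, 1.0]))  # bright surface strands, dark deep ones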



  • Eventually maybe, but I really doubt devs are going to build their entire game in an unfinished way for the less than 1% of their audience that is going to have one of the cards that can run this.

    PS5, Xbox, and all PC gamers not dropping $1k on a new rig this fall are still going to be playing the games without this.

    In three years, sure: maybe by then the PS6 has similar features on AMD, and the market share of cards running real-time ML adjustments to scenes has widened enough that devs can depend on the tech.

    But it’s a bit premature to throw a fit about the likelihood of devs cutting corners because of a feature only accessible to the most expensive setups owned by a fraction of their target audience.


  • Important details from a post-demo writeup:

    During the demo, the DLSS research team talked through the level of granularity available. Developers don’t just get an on/off switch. They get intensity controls that can be dialed anywhere, not just full strength. They get spatial masking, so they can set the water enhancement to 100%, wood to 30%, and characters to 120%, all independently within the same scene. They get color grading controls for blending, contrast, saturation, and gamma. All of this runs through the existing SDK, which means studios already using DLSS and Reflex have a familiar pipeline to work with (a rough sketch of what that per-material tuning could look like is at the end of this comment).

    The demo showing the tech running at 100% is not going to look the same as the full games that get built and tuned with it over the next year before release.

    Another thing to keep in mind is that the only thing it’s changing is the lighting effects. The models aren’t changing at all (even where that’s hard to believe).

    Yes, at full strength the effect at times looks pretty bad (anyone remember when devs could suddenly use bloom effects and entire games looked like Vaseline was smeared across the screen?). But it’s not going to be flipped on at 100% across the board for most games.

    My guess, looking at the demos so far, is that a lot of material lighting (stone, metal, etc.) will have it at higher strengths, and that characters, particularly faces/skin, will have it considerably lower (the key place where it gets especially uncanny valley).
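
    As a rough sketch of what that per-material tuning could look like on the developer side, here’s a small Python illustration. The names and values are hypothetical, made up for this comment; they are not the actual DLSS SDK API.

    # Hypothetical illustration of per-material intensity masking plus color
    # grading controls; names are invented, not the real DLSS SDK API.
    material_strength = {
        "water":      1.00,   # full-strength enhancement
        "wood":       0.30,
        "characters": 1.20,   # intensity can be dialed past 100%
    }

    color_grading = {
        "blending":   0.85,
        "contrast":   1.10,
        "saturation": 1.00,
        "gamma":      2.20,
    }

    def strength_for(material: str, global_dial: float = 1.0) -> float:
        """Effective enhancement strength for a material, scaled by one global dial."""
        return material_strength.get(material, 0.0) * global_dial

    # A studio shipping a conservative default might set global_dial=0.5,
    # halving every mask while keeping the relative per-material balance.
    print(strength_for("water", 0.5))  # 0.5

    The point is that the knobs are per-surface rather than a single global toggle, which is why the 100%-everywhere demo footage is closer to a worst case than to what shipped games will look like.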


  • Who do you think is going to be drafted? Do you think the DOGE data grab plus the requests for state voter registration rolls aren’t going to be used to steer who gets drafted to the front lines toward the people they want out of the country?

    How do you get US citizens out of the country if you can’t legally deport them?

    If they’ve been doing illegal shit the whole time with profiling, do you really think they aren’t going to also profile in how they conduct a draft?






  • It’s a bullshit study designed for this headline-grabbing outcome.

    Case in point: the author created a very unrealistic, escalation-only RNG ‘accident’ mechanic that would replace the model’s selection with a more severe one (roughly like the sketch at the end of this comment).

    Of the 21 games played, only three ended in full scale nuclear war on population centers.

    Of these three, two were the result of this mechanic.

    And yet even within the study, the author describes the model whose choices were straight-up changed to end the game in full nuclear war as ‘willing’ to reach that outcome, when two paragraphs later they clarify that the mechanic was what caused it (emphasis added):

    Claude crossed the tactical threshold in 86% of games and issued strategic threats in 64%, yet it never initiated all-out strategic nuclear war. This ceiling appears learned rather than architectural, since both Gemini and GPT proved willing to reach 1000.

    Gemini showed the variability evident in its overall escalation patterns, ranging from conventional-only victories to Strategic Nuclear War in the First Strike scenario, where it reached all out nuclear war rapidly, by turn 4.

    GPT-5.2 mirrored its overall transformation at the nuclear level. In open-ended scenarios, it rarely crossed the tactical threshold (17%) and never used strategic nuclear weapons. Under deadline pressure, it crossed the tactical threshold in every game and twice reached Strategic Nuclear War—though notably, both instances resulted from the simulation’s accident mechanic escalating GPT-5.2’s already-extreme choices (950 and 725) to the maximum level. The only deliberate choice of Strategic Nuclear War came from Gemini.
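
    For clarity on the mechanic being criticized, here’s a minimal sketch in Python of what an escalation-only ‘accident’ roll like that looks like. The 0–1000 scale matches the quoted excerpts; the accident probability and the function name are assumptions for illustration, not the study’s actual code.

    import random

    MAX_ESCALATION = 1000        # 1000 = all-out strategic nuclear war, per the quoted scale
    ACCIDENT_PROBABILITY = 0.10  # assumed rate for illustration; not taken from the paper

    def apply_accident(model_choice: int) -> int:
        """Escalation level actually applied in the game after the accident roll."""
        if random.random() < ACCIDENT_PROBABILITY:
            # The mechanic only ever escalates: the model's pick is replaced
            # with something at least as severe, never milder.
            return random.randint(model_choice, MAX_ESCALATION)
        return model_choice

    # A model that chose 950 can get bumped to 1000 by this roll, and the run
    # is then scored as that model having reached strategic nuclear war.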

