The OFFICIAL tech stuff thread
-
Whoa, we have clinical and billing docs in PDF. If they're talking about reading those to help their lil AI monsters, it's war. We can't eat HIPAA violation fees at that rate.
-
I expect to see a bunch of AI-textured medieval legs running around soon enough.
-
Maybe you should use the AI to finally fucking finish your game? Like it’s been in early access for nearly as long as Star Citizen!
-
@Gators1 said in The OFFICIAL tech stuff thread:
Maybe you should use the AI to finally fucking finish your game? Like it’s been in early access for nearly as long as Star Citizen!
I just finished lazily slapping some placeholder textures on my crappy gun with Adobe Substance 3D Painter. Those AI legs might come after you with big irons on their hip if you don’t watch your mouth, gator boi.

-
@Jerraye said in The OFFICIAL tech stuff thread:
@Gators1 said in The OFFICIAL tech stuff thread:
Maybe you should use the AI to finally fucking finish your game? Like it’s been in early access for nearly as long as Star Citizen!
I just finished lazily slapping some placeholder textures on my crappy gun with Adobe Substance 3D Painter. Those AI legs might come after you with big irons on their hip if you don’t watch your mouth, gator boi.

Call it the GatorBuster!
-
Study finds a quarter of bosses hoped return to office would make employees quit
More than a third (37 percent) of respondents in leadership roles believed their employers had undertaken layoffs in the past 12 months as a result of too few people quitting in protest of RTO mandates, the study found.
-
I wonder how they’re gonna get their next round of cheap layoffs? “Hey listen everyone, we’re gonna need you to start working three days a week at the local landfill. Our research indicates that people working in offices aren’t productive enough.”
-
Ooh wee, I had a lycos email account until about 2006. I still technically have one, just can’t log in
I wonder if my inbox is full?
-
@Zeppelin said in The OFFICIAL tech stuff thread:
I wonder if my inbox is full?
I’ve got an old yahoo email account that they delete after I don’t log in for a few years. Then I log in and they reactivate the address but the email history is all gone. Rinse and repeat.
-
@Hog said in The OFFICIAL tech stuff thread:
@Zeppelin said in The OFFICIAL tech stuff thread:
I wonder if my inbox is full?
I’ve got an old yahoo email account that they delete after I don’t log in for a few years. Then I log in and they reactivate the address but the email history is all gone. Rinse and repeat.
I have one of those too. And in order to retrieve my password, I would need to receive an email, on my lycos account.
Or maybe it’s the other way around but they are both linked and locked
-
Worth a try getting in. You might get some good deals, pre-inflation prices on viagra.
-
I hate everything about this:

-
@Gators1 said in The OFFICIAL tech stuff thread:
Worth a try getting in. You might get some good deals, pre-inflation prices on viagra.
I’ve tried. Not for the Viagra, it’s for the porn mailing lists.
-
Good idea IMO.
Edit: it’s paywalled. Copy paste below:
AI is coming for our anger
A SoftBank project is working on technology that takes the rage out of customer phone calls
“I’m a human being, God damn it! My life has value! . . . I’m as mad as hell, and I’m not going to take this any more!”
Howard Beale, the prophetically fuming anti-hero from the 1976 film Network, was certainly very angry. Increasingly, according to successive Gallup surveys of the world’s emotional state, we all are.
But possibly not for much longer if artificial intelligence has any say in it. AI was already coming for our jobs; now it is coming for our fury. The question is whether anything has a right to take that fury without permission, and whether anyone is ready to fight for our right to rage.
This month, the separately listed mobile arm of Masayoshi Son’s SoftBank technology empire revealed that it was developing an AI-powered system to protect browbeaten workers in call centres from down-the-line diatribes and the broad palette of verbal abuse that falls under the definition of customer harassment.
It is unclear if SoftBank was deliberately seeking to evoke dystopia when it named this project, but “EmotionCancelling Voice Conversion Engine” has a bleakness that would turn George Orwell green.
The technology, developed at an AI research institute established by SoftBank and the University of Tokyo, is still in its R&D phase, and the early demo version suggests there is plenty more work ahead. But the principle is already sort of working, and it is as weird as you might expect.
In theory, the voice-altering AI changes the rant of an angry human caller in real time so the person at the other end hears only a softened, innocuous version. The caller’s original vocabulary remains intact (for now; give dystopia time to solve that one). But, tonally, the rage is expunged. Commercialisation and installation in call centres, reckons SoftBank, can be expected sometime before March 2026.
SoftBank is developing an AI-powered system to protect workers in call centres from furious phone calls and customer harassment by altering voices to sound softer. The project is still at an early stage; this is a demo of how far it’s come, with both the angry and the modified voice. © SoftBank
As with so many of these projects, humans have collaborated for cash with their future AI overlords. The EmotionCancelling engine was trained using actors who performed a large range of angry phrases and a gamut of ways of giving outlet to ire such as shouting and shrieking. These provide the AI with the pitches and inflections to detect and replace.
Set aside the various hellscapes this technology conjures up. The least imaginative among us can see ways in which real-time voice alteration could open a lot of perilous paths. The issue, for now, is ownership: the lightning evolution of AI is already severely testing questions of voice ownership by celebrities and others; SoftBank’s experiment is testing the ownership of emotion.
SoftBank’s project was clearly well intentioned. The idea apparently came to one of the company’s AI engineers who watched a film about rising abusiveness among Japanese customers towards service-sector workers — a phenomenon some ascribe to the crankiness of an ageing population and the erosion of service standards by acute labour shortages.
The EmotionCancelling engine is presented as a solution to the intolerable psychological burden placed on call centre operators, and the stress of being shouted at. As well as stripping rants of their frightening tone, the AI will step in to terminate conversations it deems have been too long or vile.
But protection of the workers should not be the only consideration here. Anger may be a very unpleasant and scary thing to receive, but it can be legitimate and there must be caution in artificially writing it out of the customer relations script — particularly if it only increases when the customer realises their expressed rage is being suppressed by a machine.
Businesses everywhere can — and do — warn customers against abusing staff. But removing anger from someone’s voice without their permission (or by burying that permission in fine print) steps over an important line, especially when AI is put in charge of the removal.
The line crossed is where a person’s emotion, or a certain tone of voice, is commoditised for treatment and neutralisation. Anger is an easy target for excision, but why not get AI to protect call centre operators from disappointment, sadness, urgency, despair or even gratitude? What if it were decided that some regional accents were more threatening than others and sandpapered by algorithm without their owners knowing?
In an extensive series of essays published last week, Leopold Aschenbrenner, a former researcher at OpenAI who worked on protecting society from the technology, warned that while everyone was talking about AI, “few have the faintest glimmer of what is about to hit them”.
Our best strategy, in the face of all this, may be to remain as mad as hell.
-
Wow, what shitbags. Not using AI to improve their shit service; instead using AI to reduce call center turnover to improve their bottom line. I think Putin has a bigger heart than some of our banks.
-
I would have thought the AI search answers would expose Google to lawsuits from people getting sick or dying after following dangerous and wrong advice (I’m sure their lawyers are better than me at this stuff though).
Now I’m wondering how the below doesn’t expose them to false advertising or damages from malware ads since they are now actively writing the sales pitch and not just publishing someone else’s claims.
-
Damn, I’m finding Copilot (formerly Bing Chat, and probably a half dozen other names) has taken a massive hit in usefulness recently. I’m now almost positive that it’s trying to match your query against a previously asked one and serving up the cached answer. I’ve tried rephrasing the question only to get the exact same answer back. Makes sense they’d try it given how expensive that shit is to process but it means that 30+% of the time I’m getting answers that are for questions I didn’t ask. I think we’re starting to see the wheels fall off of this whole fad. I think the company I’m working for is paying for this shit too.
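For anyone curious, here's a rough sketch of what that "serve the cached answer" behavior could look like. To be clear, Copilot's internals aren't public, so this is pure speculation; the class, threshold, and similarity method are all made up for illustration. The idea is just: compare the incoming query against previously answered ones, and if one is "close enough," return its stored answer instead of paying to run the model again.

```python
# Hypothetical sketch of query-similarity answer caching. Everything here
# (AnswerCache, the 0.8 threshold, using difflib) is an assumption, not
# how Copilot actually works.
from difflib import SequenceMatcher

class AnswerCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold  # similarity needed for a cache hit
        self.entries = []           # list of (query, answer) pairs

    def store(self, query, answer):
        self.entries.append((query, answer))

    def lookup(self, query):
        # Return a cached answer if an earlier query is similar enough.
        for cached_query, answer in self.entries:
            ratio = SequenceMatcher(
                None, query.lower(), cached_query.lower()
            ).ratio()
            if ratio >= self.threshold:
                # Serves the OLD answer, even if the new question
                # differs in the details that mattered to you.
                return answer
        return None

cache = AnswerCache()
cache.store("how do I sort a list in python", "Use sorted() or list.sort().")
# A lightly rephrased query still hits the cache and gets the same answer:
print(cache.lookup("how do i sort a list in Python?"))
```

Which would explain exactly the failure mode above: rephrasing the question doesn't move it below the similarity threshold, so you keep getting the same answer back.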
-
Apparently AI is using all the power in the world and we will soon be back to using whale oil lamps.

