The OFFICIAL tech stuff thread
-
@Hog with our luck he will buy LOT from Lithu and then use our subscription fees to buy some commie or European forum and merge them.
-
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
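(For the curious, that kind of configured-limit-versus-actual-usage mismatch is the sort of thing you can spot with a few lines of script. This is only a sketch, not our actual tooling: it assumes a Linux host with the third-party psutil package, and the process name and the 5 MB figure are placeholders for whatever the real config says. It won’t tell you which internal cache is at fault, but it shows the shape of the check.)

```python
# Hypothetical sketch: flag a process whose resident memory blows way past
# the cache budget it was supposedly configured with.
# Assumes Linux + the psutil package; the name and limit below are made up.
import psutil

CONFIGURED_CACHE_LIMIT = 5 * 1024 * 1024   # 5 MB, the alleged configured ceiling
SUSPECT_NAME = "some_app_process"          # placeholder process name

for proc in psutil.process_iter(["name", "memory_info"]):
    info = proc.info
    if info["name"] == SUSPECT_NAME and info["memory_info"] is not None:
        rss = info["memory_info"].rss      # resident set size in bytes
        if rss > CONFIGURED_CACHE_LIMIT:
            print(f"{info['name']} (pid {proc.pid}) is using "
                  f"{rss / 1024**3:.1f} GiB against a "
                  f"{CONFIGURED_CACHE_LIMIT / 1024**2:.0f} MiB cache limit")
```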
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
-
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
-
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
Welcome to 2005 ;)
-
@tigger said in The OFFICIAL tech stuff thread:
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
Welcome to 2005 ;)
Hmm, our virtual servers probably have that, but not the individually carved-out ones. Mainframes do similar workloads to Unix or Windows business machines but with a lot less memory, due to superior I/O.
Storage is in petabytes, but I haven’t heard of petabyte memory.
-
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
The whole ERP (and more) runs on an in-memory database. I only found out about the 5 terabyte RAM figure when I asked the obvious question yesterday: “Has anyone discussed upgrading the machine?” That’s when I was told it was already way above spec. It’s a big company with a lot of data, but I asked Bard what was typical for large companies using that product and it said one to two terabytes of RAM. I don’t know if we just have more data or just don’t know how to configure it. I did see one of the external support consultants from the vendor post a screenshot of Azure pricing in the bridge chat, so maybe we’re going to give it even more :)
(But seriously, I doubt we’re going to add more RAM - I didn’t actually read the context of the post.)
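For a sense of scale: the usual back-of-envelope sizing I’ve seen quoted for in-memory databases (a generic rule of thumb, not the vendor’s official formula, and every number below is invented for illustration) is roughly the compressed data footprint plus the same again for working space:

```python
# Rough, unofficial rule-of-thumb sizing for an in-memory database host.
# Compression ratio and working-space factor are assumptions, not vendor figures.
source_data_tb = 10.0        # raw data volume in the old disk-based system (made up)
compression_ratio = 4.0      # assume roughly 4x columnar compression
working_space_factor = 2.0   # the footprint again for temporary/intermediate results

in_memory_footprint_tb = source_data_tb / compression_ratio          # ~2.5 TB
recommended_ram_tb = in_memory_footprint_tb * working_space_factor   # ~5.0 TB

print(f"data in memory: ~{in_memory_footprint_tb:.1f} TB, "
      f"suggested RAM: ~{recommended_ram_tb:.1f} TB")
```

So whether 5 TB is generous or tight really comes down to how much data is loaded and how well it compresses.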
-
@Kilemall said in The OFFICIAL tech stuff thread:
@tigger said in The OFFICIAL tech stuff thread:
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
Welcome to 2005 ;)
Hmm, our virtual servers probably have that, but not the individually carved-out ones. Mainframes do similar workloads to Unix or Windows business machines but with a lot less memory, due to superior I/O.
Storage is in petabytes, but I haven’t heard of petabyte memory.
Any cache database is gonna want a lot of memory. Guessing AWS has a bunch of those for their products and large businesses concerned with latency in delivering stuff on the internetz.
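A toy version of what those cache tiers do under the hood (just a sketch to show why more memory means fewer evictions and fewer slow trips back to the real data store, not any particular product’s design) looks something like this:

```python
# Toy LRU cache with a crude entry budget. The point: the bigger the budget,
# the fewer evictions, and the fewer slow round trips to the backing store.
from collections import OrderedDict

class LRUCache:
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None                     # miss: caller hits the slow backend
        self._data.move_to_end(key)         # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(max_entries=2)
cache.put("a", 1); cache.put("b", 2); cache.put("c", 3)
print(cache.get("a"))   # None: "a" was evicted because the budget was too small
```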
-
@Hog said in The OFFICIAL tech stuff thread:
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
The whole ERP (and more) runs on an in-memory database. I only found out about the 5 terabyte RAM figure when I asked the obvious question yesterday: “Has anyone discussed upgrading the machine?” That’s when I was told it was already way above spec. It’s a big company with a lot of data, but I asked Bard what was typical for large companies using that product and it said one to two terabytes of RAM. I don’t know if we just have more data or just don’t know how to configure it. I did see one of the external support consultants from the vendor post a screenshot of Azure pricing in the bridge chat, so maybe we’re going to give it even more :)
(But seriously, I doubt we’re going to add more RAM - I didn’t actually read the context of the post.)
Did you ever ask them why they are asking an application developer to troubleshoot server configurations instead of someone that knows about that shit? And maybe get the networking assholes in the room too to see if they might be the bottleneck?
-
@Gators1 said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
The whole ERP (and more) runs on an in-memory database. I only found out about the 5 terabyte RAM figure when I asked the obvious question yesterday: “Has anyone discussed upgrading the machine?” That’s when I was told it was already way above spec. It’s a big company with a lot of data, but I asked Bard what was typical for large companies using that product and it said one to two terabytes of RAM. I don’t know if we just have more data or just don’t know how to configure it. I did see one of the external support consultants from the vendor post a screenshot of Azure pricing in the bridge chat, so maybe we’re going to give it even more :)
(But seriously, I doubt we’re going to add more RAM - I didn’t actually read the context of the post.)
Did you ever ask them why they are asking an application developer to troubleshoot server configurations instead of someone that knows about that shit? And maybe get the networking assholes in the room too to see if they might be the bottleneck?
It’s a fair point, but the users had been complaining generally about performance since the upgrade went live, and the technical team that looks after performance kept repeating that, aside from the odd spike, the system was healthy.
Then one of the critical processes for production shat itself, and the symptom was “user clicks this button and has to wait 2 to 5 minutes, then clicks this button and has to wait…” etc. Since that was the specific issue that got elevated to P2⁺, and I’d been nominated before the upgrade for 24/7 hypercare (all apps, not just the ones I’d worked on), I got roped in on the crisis team to fix it. I didn’t know all the context, so I started by looking at what the app was doing.
⁺(I don’t think there is a P1, or at least I’ve never heard of it. I assume if it happens it means I should start looking for another job.)
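(And for the curious, “looking at what the app was doing” started as nothing fancier than wrapping the suspect steps in timers so the bridge call got numbers instead of guesses. A generic sketch, with placeholder function names standing in for the real calls:)

```python
# Crude first-pass profiling: time each step of a suspect workflow so you can
# say which one eats the 2-to-5-minute waits. Function names are placeholders.
import time
from contextlib import contextmanager

@contextmanager
def timed(step_name: str):
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{step_name}: {time.perf_counter() - start:.1f}s")

def fetch_order_lines():      # placeholder for a real backend call
    time.sleep(0.2)

def recalculate_pricing():    # placeholder for another real backend call
    time.sleep(0.1)

with timed("fetch order lines"):
    fetch_order_lines()
with timed("recalculate pricing"):
    recalculate_pricing()
```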
-
@Hog said in The OFFICIAL tech stuff thread:
@Gators1 said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
The whole ERP (and more) runs on an in-memory database. I only found out about the 5 terabyte RAM figure when I asked the obvious question yesterday: “Has anyone discussed upgrading the machine?” That’s when I was told it was already way above spec. It’s a big company with a lot of data, but I asked Bard what was typical for large companies using that product and it said one to two terabytes of RAM. I don’t know if we just have more data or just don’t know how to configure it. I did see one of the external support consultants from the vendor post a screenshot of Azure pricing in the bridge chat, so maybe we’re going to give it even more :)
(But seriously, I doubt we’re going to add more RAM - I didn’t actually read the context of the post.)
Did you ever ask them why they are asking an application developer to troubleshoot server configurations instead of someone that knows about that shit? And maybe get the networking assholes in the room too to see if they might be the bottleneck?
It’s a fair point, but the users had been complaining generally about performance since the upgrade went live, and the technical team that looks after performance kept repeating that, aside from the odd spike, the system was healthy.
Then one of the critical processes for production shat itself, and the symptom was “user clicks this button and has to wait 2 to 5 minutes, then clicks this button and has to wait…” etc. Since that was the specific issue that got elevated to P2⁺, and I’d been nominated before the upgrade for 24/7 hypercare (all apps, not just the ones I’d worked on), I got roped in on the crisis team to fix it. I didn’t know all the context, so I started by looking at what the app was doing.
⁺(I don’t think there is a P1, or at least I’ve never heard of it. I assume if it happens it means I should start looking for another job.)
Didn’t you say your contract was ending anyway? Also did you try rebooting?
-
@Gators1 said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
@Gators1 said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
The whole ERP (and more) runs on an in-memory database. I only found out about the 5 terabyte RAM figure when I asked the obvious question yesterday: “Has anyone discussed upgrading the machine?” That’s when I was told it was already way above spec. It’s a big company with a lot of data, but I asked Bard what was typical for large companies using that product and it said one to two terabytes of RAM. I don’t know if we just have more data or just don’t know how to configure it. I did see one of the external support consultants from the vendor post a screenshot of Azure pricing in the bridge chat, so maybe we’re going to give it even more :)
(But seriously, I doubt we’re going to add more RAM - I didn’t actually read the context of the post.)
Did you ever ask them why they are asking an application developer to troubleshoot server configurations instead of someone that knows about that shit? And maybe get the networking assholes in the room too to see if they might be the bottleneck?
It’s a fair point, but the users had been complaining generally about performance since the upgrade went live, and the technical team that looks after performance kept repeating that, aside from the odd spike, the system was healthy.
Then one of the critical processes for production shat itself, and the symptom was “user clicks this button and has to wait 2 to 5 minutes, then clicks this button and has to wait…” etc. Since that was the specific issue that got elevated to P2⁺, and I’d been nominated before the upgrade for 24/7 hypercare (all apps, not just the ones I’d worked on), I got roped in on the crisis team to fix it. I didn’t know all the context, so I started by looking at what the app was doing.
⁺(I don’t think there is a P1, or at least I’ve never heard of it. I assume if it happens it means I should start looking for another job.)
Didn’t you say your contract was ending anyway? Also did you try rebooting?
Yeah, I’ll be finished in three weeks’ time. In between having five Teams chats going off like a pinball machine, I’m trying to document extra stuff for the next poor slob. That poor slob might turn out to be me if my two bosses get their way, but it’s far from guaranteed. I’m not sure whether I’d be happier if I get extended next year or happier if I don’t. Anyway, I told them I’m away all of January and I’m not fussed if they don’t have me back until February or March. Regardless, I’m going to stop thinking about it as soon as I hit the beach.
-
@Kilemall said in The OFFICIAL tech stuff thread:
@tigger said in The OFFICIAL tech stuff thread:
@Kilemall said in The OFFICIAL tech stuff thread:
@Hog said in The OFFICIAL tech stuff thread:
One of our servers, probably the main server for most business functions, has 5 terabytes of RAM and is otherwise configured up the wazoo with the most premium disk and CPUs that Azure has to offer.
It runs like shit. It’s always run like shit but we had a software version upgrade and now it’s often unusable. Some of the business functions that used to take 10 minutes (but should have been much less) can take hours now and it’s affecting production.
There’s a 24/7 bridge call set up for the crisis that I get sucked into occasionally, and it sometimes produces actions like “Hog, explain why this complex application that you’ve never seen before today is slow, and report back in an hour with your recommendations”. And they’ll hassle you for updates right on the hour if you haven’t reported back, because you don’t understand wtf the app is doing functionally, let alone technically…
A couple of times it’s looked like I’m going to have to rewrite whole apps or re-architect others while the business bleeds and IT management drums their fingers and hovers over my shoulders.
Thankfully it seems that the penny has dropped that it’s not an app specific problem but something wrong with the system. I’m hearing things like some process or other trying to use 50 gigabytes of cache that’s only been configured to allow 5 megabytes.
Of course, I’m not counting on that so I’m trying to learn as much as I can about the apps and processes I might have to rewrite at no notice.
Fuck me, IT is stressful sometimes. I have some talents but one of them isn’t thinking well and understanding stuff under extreme time pressure.
I just dealt with a server hang that unlike most of ours actually affects patient care. You could have that pressure.
Anything with 5 TB of memory is either a supercomputer modeling climate or Mach 25 missiles, a central Google AI, or something badly configured, with Microsoft enjoying charging rent to the moron who designed the thing.
I’m guessing the latter.
Welcome to 2005 ;)
Hmm, our virtual servers probably have that, but not the individually carved-out ones. Mainframes do similar workloads to Unix or Windows business machines but with a lot less memory, due to superior I/O.
Storage is in petabytes, but I haven’t heard of petabyte memory.
None of this is on a single physical computer. For how the data is stored, you can read this: https://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf
For how this applies to SQL-like databases, read the design of BigQuery.
As for memory, that is divided between a lot of machines. You cannot simply write a normal binary that uses 5 TB of memory; you have to write it so that 10,000 machines, each using 500 MB of memory, process the data.
So typically you’ll use some implementation of MapReduce; the article for that is here. These are all well-written and easy-to-read articles that show how computers work these days, and I think they carry good didactic value.
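If you want the flavour of that split-the-work-up model without a cluster, here is a miniature map/reduce in plain Python. It is obviously nothing like Google’s actual implementation, just the shape of the idea: each worker only ever holds its own small chunk, and the partial results get merged at the end.

```python
# Miniature map/reduce word count: each worker processes only its own chunk
# (no single process needs the whole data set in memory), then the partial
# results are merged. The shape of the idea, not a real distributed runtime.
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    """Map step: count words in one chunk of the input."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-chunk counts into one total."""
    total = Counter()
    for part in partials:
        total.update(part)
    return total

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog", "the end"] * 1000
    chunk_size = 500
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]

    with Pool(processes=4) as pool:
        partials = pool.map(map_chunk, chunks)   # "10,000 machines" in miniature

    print(reduce_counts(partials).most_common(3))
```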
-
Plus I think it’s really easy to try out, because a lot of these cloud providers give you a small account for free; I believe I have one set up with Oracle…
-
Where’s the P2 call to discuss adding another 5 terabytes of RAM to this forum’s server?

-
Could do with 5 TB for sure!
-
Doesn’t always work though. I downloaded over a TB of ram over the years and still never got more than 30 FPS on WW2OL.
-
@Gators1 said in The OFFICIAL tech stuff thread:
Doesn’t always work though. I downloaded over a TB of ram over the years and still never got more than 30 FPS on WW2OL.
The issue is that the downloaded memory often fills your hard disk before it can be actually installed.
-
@tigger said in The OFFICIAL tech stuff thread:
@Gators1 said in The OFFICIAL tech stuff thread:
Doesn’t always work though. I downloaded over a TB of ram over the years and still never got more than 30 FPS on WW2OL.
The issue is that the downloaded memory often fills your hard disk before it can be actually installed.
Duh, I know that! That’s why I downloaded a NAS first and filled it with downloaded hard drives before I installed the downloaded memory!
-
This was high tech in the 1800s. I guess the French beat Edison kinda.
-
The real reason head-butting became unpopular at the end of the 18th century!
