
npub1qdjn8j4gwgmkj3k5un775nq6q3q7mguv5tvajstmkdsqdja2havq03fqm7
hex: b55f41d0e19747a4c88159506884128c4215182c41976b801bcf93b998332ed9
nevent1qqst2h6p6rsew3ayezq4j5rgssfgcss4rqkyr9mtsqdulyaenqejakgprpmhxue69uhhyetvv9ujuem4d36kwatvw5hx6mm9qgsqxefne258ydmfgm2wfl02fsdqgs0d5wx29kweg9amxcqxew4t7kqhuctun
Kind 1 (Text Note)
↳ Reply to 010df0c9... (npub1qyxlpj2gl6dt2nfvkl4yyrl6pr2hjkycrdh2dr5r42n7ktwn7pdqrdmu7u)
So many modern tools the path of least resistance is update all. I don't know how people operate like that. I test 1, then 2-3 in a batch, and so on a...
I mean, in software we have automated integration testing for exactly this, but it's really difficult to reproduce prod conditions because of a billion different factors. One way around this in the DC now is separating data from config, which is what I try to do. Distros like Fedora are coming out with new configuration-based deployments, so you can define your OS deployment as a config file, as part of your infra.
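For instance, a Fedora CoreOS-style deployment is driven by a Butane file that compiles to Ignition (`butane config.bu > config.ign`) and fully describes the machine. A minimal sketch; the user, SSH key, and hostname below are placeholders, not anything from this note:

```yaml
# Butane config (Fedora CoreOS variant). Compile with:
#   butane config.bu > config.ign
# All values below are placeholder examples.
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder admin@example.com
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: app-node-01
```

The point of the format is that the whole OS deployment is reviewable and diffable like any other file in your infra repo.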
With HA hypervisors, you can scratch-deploy a new machine from a config file and mount shared data storage. Upgrades are then basically done from an IDE and can be easily tested before deployment, and the hypervisor provides the abstraction layer so the VM doesn't know or care what it's running on.
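The "mount shared data storage" part can live in the same deployment config, so a scratch-deployed VM attaches its data disk on first boot. A Butane sketch, assuming a Fedora CoreOS guest and a shared disk labeled `shared-data` (both hypothetical):

```yaml
# Declares a filesystem plus a generated systemd mount unit for a
# shared data disk. Device label and mount path are hypothetical.
variant: fcos
version: 1.5.0
storage:
  filesystems:
    - device: /dev/disk/by-label/shared-data
      path: /var/srv
      format: xfs
      wipe_filesystem: false
      with_mount_unit: true
```

Because the data sits on the shared disk rather than inside the VM image, replacing or upgrading the machine is just redeploying from the config and re-mounting.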
But on the hypervisor side the same issue reappears: you're sitting on hardware again, and it comes down to availability in numbers, like it always has.
Raw JSON
{
  "kind": 1,
  "id": "b55f41d0e19747a4c88159506884128c4215182c41976b801bcf93b998332ed9",
  "pubkey": "036533caa872376946d4e4fdea4c1a0441eda38ca2d9d9417bb36006cbaabf58",
  "created_at": 1773333180,
  "tags": [
    [
      "e",
      "c72c9827eb6b898b5f4cd29bb0018bd34fac394208e45568713018bf4356d58e",
      "wss://nostr.land/",
      "root",
      "010df0c948fe9ab54d2cb7ea420ffa08d57958981b6ea68e83aaa7eb2dd3f05a"
    ],
    [
      "e",
      "9e490974aa1c76643762c8c599f7750efbdf5a85650e907d6024a306387692bd",
      "wss://nostr.wine/",
      "reply",
      "010df0c948fe9ab54d2cb7ea420ffa08d57958981b6ea68e83aaa7eb2dd3f05a"
    ],
    [
      "p",
      "fea186c2a4678dbc437704eed2160846e8a781e5fb17056e9bb333840d5bdef2",
      "wss://relay.damus.io/"
    ],
    [
      "p",
      "010df0c948fe9ab54d2cb7ea420ffa08d57958981b6ea68e83aaa7eb2dd3f05a",
      "wss://nostr.wine/"
    ],
    [
      "p",
      "08bfc00b7f72e015f45c326f486bec16e4d5236b70e44543f1c5e86a8e21c76a",
      "wss://nos.lol/"
    ],
    [
      "p",
      "036533caa872376946d4e4fdea4c1a0441eda38ca2d9d9417bb36006cbaabf58",
      "wss://relay.primal.net/"
    ]
  ],
  "content": "I mean in software we have automated integration testing for these purposes, but it's really difficult to reproduce in prod due to a billion different factors. Some ways around this now in the DC are separating data from config. That's what I try to do. Distros like fedora are coming out with new configuration based deployments. So you can configure your OS deployment as part of your infra as a config file. \n\nWith HA hypervisors, you can scratch deploy a new machine from a config file and mount shared data storage. Then upgrades are basically done from an IDE and can be easily tested before deployment. Then your hypervisor provide that abstraction layer so the VM doesn't know or care what it's running on. \n\nBut on the hypervisor side the same issue appears, now sitting on hardware again, it's availability in numbers, like it always has been. ",
  "sig": "aec0b2cf4b6c0dcc8bb8187b794457a6e8122cad75899f5d2b057cdd5c8d87c301e7c085daea63e868c0539bb403d8f541eec9a2e7e69c7f5140d988f4de609d"
}