So you just asked about one of the most confusing things in AWS: service names that changed over time.
Before S3 had an archival tier, there was a separate service that AWS called Amazon Glacier, later renamed Amazon S3 Glacier.
Around 2012 AWS started adding archival tiers to S3 itself, which made the standalone service largely redundant. I recommend you look at S3 proper unless you have something like a Synology NAS that can directly integrate with the older job-based API used by the original Glacier service.
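To make that difference concrete, here's a minimal sketch with boto3 of what a retrieval looks like against each API (vault, bucket, key, and archive ID are all made-up placeholders):

```python
import boto3

# Old standalone Glacier: retrieval is asynchronous. You initiate a job,
# wait (historically hours) for it to finish, then fetch the job output.
glacier = boto3.client("glacier")
job = glacier.initiate_job(
    vaultName="my-archive-vault",          # hypothetical vault name
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE_ARCHIVE_ID",  # ID returned at upload time
    },
)
# ...poll describe_job or wait on an SNS notification, then:
# output = glacier.get_job_output(vaultName="my-archive-vault", jobId=job["jobId"])

# S3 proper: a plain synchronous GET, even for objects sitting in the
# Glacier Instant Retrieval storage class.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-archive-bucket", Key="backup.tar")  # hypothetical names
```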
So, let's say I have a 1 TB archival file, a single tarball, and I upload it to a brand-new S3 bucket with no versioning or other special features, except a lifecycle policy that moves objects from S3 Standard to S3 Glacier Instant Retrieval after 0 days. Effectively, I upload the file and it moves to Glacier-class storage.
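That lifecycle rule is a one-time setup. A minimal sketch with boto3, assuming a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to Glacier Instant Retrieval as soon as possible.
# "Days: 0" means the transition is queued at the next lifecycle run, so in
# practice the object can sit in Standard for up to a day (hence the worst
# case in the math below).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-to-glacier-ir",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": 0, "StorageClass": "GLACIER_IR"},
                ],
            }
        ]
    },
)
```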
S3 Standard runs about $23/TB/month, and let's say worst case our data sits on Standard for one whole day before moving:
$0.77 for the day of storage, plus a fraction of a cent for the PUT (API requests are billed at $0.005 per 1,000 PUTs)
Then there is the lifecycle charge to move the data from Standard to Glacier, with one request per object on each side. Since we only have one object, the cost is:
$0.004 out of Standard
$0.02 into Glacier
The Glacier Instant Retrieval tier costs about $4.10/TB/month. Since we'd be there for all but one day of the month, the storage cost on the first bill would be about:
$3.96
From the second month onward you'd pay just the $4.10/month, unless you are constantly adding or removing data.
Let's say six months later you download your 1 TB archive file. That would incur a retrieval charge of up to $30 (about $0.03/GB), plus any data transfer out charges if you pull it down over the internet.
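If you want to sanity-check those numbers, here's the arithmetic as a small Python sketch. The rates are the ones quoted above, treated as assumptions; actual prices vary by region and change over time:

```python
# Back-of-the-envelope cost model for the 1 TB tarball example above.
# All rates are assumptions taken from this discussion, not a price sheet.
STANDARD_PER_TB_MONTH = 23.00    # S3 Standard, ~$0.023/GB
GLACIER_IR_PER_TB_MONTH = 4.10   # Glacier Instant Retrieval, ~$0.004/GB
RETRIEVAL_PER_TB = 30.00         # GIR retrieval, ~$0.03/GB, before egress

DAYS_IN_MONTH = 30

# Worst case: one day on Standard before the lifecycle transition runs.
standard_day = STANDARD_PER_TB_MONTH / DAYS_IN_MONTH       # ~$0.77
first_month_glacier = GLACIER_IR_PER_TB_MONTH * 29 / 30    # ~$3.96
transition_requests = 0.004 + 0.02                         # per the breakdown above

first_bill = standard_day + first_month_glacier + transition_requests
print(f"first month:        ~${first_bill:.2f}")                 # roughly $4.75
print(f"steady state:       ~${GLACIER_IR_PER_TB_MONTH:.2f}/month")
print(f"one full retrieval: ~${RETRIEVAL_PER_TB:.2f} before egress")
```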
Now I know that seems complicated and expensive. It is, because it's built for customers like me in my former role as a director of engineering, with complex needs and a budget to pay for it. It doesn't make sense as a large-scale backup of personal data, unless you also want to leverage other AWS services, or you are truly just dumping the data away and will likely never need to retrieve it.
S3 is great for complying with HIPAA, feeding data into a CDN, and generally shuffling data around in a performant way. I've literally dropped a petabyte of data into S3 and it just took it and did its thing.
In my personal AWS account I use S3 as a place to dump cache contents built by Lambda functions and served up by API Gateway. Doing stuff like that is super cheap. I also use private Git repos (CodeCommit), a private container registry (ECR), and a container host (ECS), and it's nice to have all of that stuff just click together.
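For a flavor of that pattern, here's a minimal Lambda handler sketch that rebuilds a cached blob and parks it in S3 for API Gateway-fronted reads. The bucket name and key are hypothetical:

```python
import json
import time

import boto3

s3 = boto3.client("s3")
CACHE_BUCKET = "my-cache-bucket"  # hypothetical bucket name


def handler(event, context):
    # Rebuild whatever expensive result you're caching...
    payload = {"built_at": time.time(), "items": [1, 2, 3]}

    # ...and drop it in S3, where later requests can serve it
    # without recomputing.
    s3.put_object(
        Bucket=CACHE_BUCKET,
        Key="cache/latest.json",
        Body=json.dumps(payload).encode("utf-8"),
        ContentType="application/json",
    )
    return {"statusCode": 200, "body": "cache refreshed"}
```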
For backing up my personal computer, I use iDrive Personal and OneDrive, where I don't have to worry about the cost per object, etc. iDrive (not an Apple service) lets you back up multiple devices to their platform and keeps them versioned.
Anyway, happy to help answer questions. Have a great day.