--- Log opened Thu Jun 16 00:11:37 2022
00:11 -!- Irssi: #friendica: Total of 7 nicks [0 ops, 0 halfops, 0 voices, 7 normal]
00:11 -!- Irssi: Join to #friendica was synced in 17 secs
--- Log closed Thu Jun 16 03:26:01 2022
--- Log opened Thu Jun 16 03:26:11 2022
03:26 -!- Irssi: #friendica: Total of 7 nicks [0 ops, 0 halfops, 0 voices, 7 normal]
03:26 -!- Irssi: Join to #friendica was synced in 15 secs
--- Log opened Thu Jun 16 09:39:21 2022
09:39 -!- Irssi: #friendica: Total of 7 nicks [0 ops, 0 halfops, 0 voices, 7 normal]
09:39 -!- Irssi: Join to #friendica was synced in 19 secs
--- Log opened Thu Jun 16 09:55:10 2022
09:55 -!- Irssi: #friendica: Total of 7 nicks [0 ops, 0 halfops, 0 voices, 7 normal]
09:55 -!- Irssi: Join to #friendica was synced in 17 secs
16:07 < fikabot_> 💬 I just updated to 2022.06
16:08 < fikabot_> 💬 DB has been getting thrashed for about half an hour to the point where the site is unresponsive. Is the thrashing expected when upgrading from the previous revision?
16:08 < fikabot_> 💬 The unresponsiveness would be because it is under-resourced for that sort of operation (maxing out CPUs and memory), which would be my fault for not boosting resources before starting this process
16:11 < fikabot_> 💬 I don't have specific information for this particular case, but in general yes, an update really taxes your database and it might also place locks on it for quite a while.
16:11 < fikabot_> 💬 I wish it had a progress meter, though.
16:11 < fikabot_> 💬 yes :)
16:11 < fikabot_> 💬 Would rebooting in the middle of that process hose it up entirely, or does it resume when it restarts?
16:13 < fikabot_> 💬 The last time I looked, the given upgrade created temporary tables and renamed them at the end, or something like that. Hopefully, atomicity would be implemented as well. But are you sure you want to experiment with this right now?
16:13 < fikabot_> 💬 Why not just put up a scheduled maintenance banner on the site for a few hours?
16:13 < fikabot_> 💬 I always snapshot before I do an upgrade
16:13 < fikabot_> 💬 worst case I'm rolling back to right before the upgrade happened
16:14 < fikabot_> 💬 it was a graceful shutdown before memory pegged and seized everything up anyway
16:14 < fikabot_> 💬 I think I am going to roll back, throw a lot of CPUs and memory at it, and try this again
16:14 < fikabot_> 💬 By the way, did the git repo always have this many screenshots in the README?
16:15 < fikabot_> 💬 How much memory did you allocate to the database node?
16:15 < fikabot_> 💬 it is all one giant node
16:15 < fikabot_> 💬 2 CPUs, 2 GB memory
16:15 < fikabot_> 💬 which is fine for day to day
16:16 < fikabot_> 💬 I recall seeing issues with certain queries blowing up in memory in certain circumstances (hopefully they have been fixed since then).
16:16 < fikabot_> 💬 Like requiring 1 GB of RAM or something like that.
16:16 < fikabot_> 💬 I think that was related to exporting, though.
16:17 < fikabot_> 💬 Yeah, swap was up to 1.8 GB, but memory pressure dropped back down a bit after about 15 minutes
16:17 < fikabot_> 💬 it may just need more CPUs
16:17 < fikabot_> 💬 literally dozens of SQL queries each consuming 6% of CPU or stalled waiting
16:24 < fikabot_> 💬 It would be nice if we could do incremental live upgrading where all queries would be scheduled sequentially and it would not overload CPU or RAM.
16:25 < fikabot_> 💬 That could be cool
16:26 < fikabot_> 💬 I imagine for busy instances with years of data the problem is much greater than for my new and small instance
16:26 < fikabot_> 💬 Yes. It is crucial to have input from the biggest instances.
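For spot-checking what the structure update is actually running, as in the "dozens of SQL queries each consuming 6% of CPU or stalled waiting" observation above, the server's process list is the usual place to look. A generic MySQL/MariaDB sketch, not anything Friendica-specific:

```
-- Show every currently executing statement, including the full query text.
SHOW FULL PROCESSLIST;

-- The same data, filterable and sortable, from information_schema.
SELECT ID, USER, TIME, STATE, LEFT(INFO, 120) AS query
  FROM information_schema.PROCESSLIST
 WHERE COMMAND <> 'Sleep'
 ORDER BY TIME DESC;
```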
16:26 < fikabot_> 💬 although there are lots of Twitter connections, which probably drive that up for my instance
16:26 < fikabot_> 💬 Yes. It is crucial to have input from the operators of the biggest instances.
16:27 < fikabot_> 💬 Now that I'm doing round two, should I stop the daemons before upgrading and then restart after, or does that sort itself out?
16:28 < fikabot_> 💬 I would definitely stop everything and do it within an offline maintenance window, but I may be too paranoid. 🙂
16:31 < fikabot_> 💬 Alright, this time doing it with 8 dedicated cores and 32 GB of RAM :)
16:32 < fikabot_> 💬 DB using 4 aggregate CPUs :)
16:32 < fikabot_> 💬 unfortunately it does seem like a table lock thing, because the website is still unresponsive for timeline pulls (but the admin panel still comes up)
16:37 < fikabot_> 💬 now more than 5 CPUs :)
16:38 < fikabot_> 💬 And how high is your I/O load according to the graphs?
16:38 < fikabot_> 💬 Could you perhaps share some stats about the data volume as well?
16:38 < fikabot_> 💬 reads about 6.5 MB/s and writes about 2.5 MB/s
16:39 < fikabot_> 💬 And how many IOPS? Doesn't sound like much (especially if you are on NVMe SSD)
16:39 < fikabot_> 💬 Let me see if I can get Digital Ocean to generate that kind of graph
16:40 < fikabot_> 💬 There is a lot of extra disk storage being used during the update, but it may be going to MySQL backup logs
16:41 < fikabot_> 💬 picked up an extra 2 GB of storage usage until I cut bait and restarted from snapshot
16:55 < fikabot_> 💬 It isn't doing it this time, so that may have just been the swapfile filling up
16:57 < fikabot_> 💬 How much swap did you allocate? Also, have you considered setting up zram instead of swap?
17:11 < fikabot_> 💬 It's moot for now
17:11 < fikabot_> 💬 but 2 GB for 2 GB of memory
17:25 < fikabot_> 💬 n/m, disk usage is going up faster than before (which makes sense with more horsepower and no memory constraint)
17:26 < fikabot_> 💬 Suppose database updates mean dropping all indexes, changing stuff, and reindexing :-)
17:27 < fikabot_> 💬 lots of things going on :)
17:28 < fikabot_> 💬 at least with my instance
17:43 < fikabot_> 💬 Got past where I was, but the problem now is it's stuck in an infinite loop, since it needs twice as much storage as before and I was at a bit over 50% of disk usage
17:43 < fikabot_> 💬 so it fills up as much as it can, then restarts the whole process
17:44 < fikabot_> 💬 Going to have to punt on this upgrade for now
17:56 < fikabot_> 💬 bkil: indexes are often dropped before updates and recreated once the changes are done, because altering columns is very slow if you do not drop them first.
17:57 < fikabot_> 💬 Hank: Did you manage to see at which point the restart happened and which error message caused it? Could you perhaps save the logs before you restore the VM?
17:59 < fikabot_> 💬 I forced the restart
18:00 < fikabot_> 💬 which time?
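Since the hosting panel's IOPS graph was not at hand above, the server's own counters can give a rough answer. A sketch using standard MySQL/MariaDB status output; sample twice and divide the difference by the interval to approximate IOPS:

```
-- Cumulative InnoDB data-file I/O since startup; diff two samples over a
-- known interval to estimate reads/s, writes/s and fsyncs/s.
SHOW GLOBAL STATUS LIKE 'Innodb_data_%';

-- The FILE I/O section of this report also shows recent reads/s,
-- writes/s and fsyncs/s averaged by the server itself.
SHOW ENGINE INNODB STATUS;
```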
18:00 < fikabot_> 💬 I was doing du and df
18:00 < fikabot_> 💬 df showed it running out of storage
18:00 < fikabot_> 💬 du showed the friendicadb at 7.5 GB and the innodb.tmp (or something) crossing about 8 GB when the disk was exhausted; then the innodb disk usage collapses back down and starts filling up again
18:01 < fikabot_> 💬 over and over
18:01 < fikabot_> 💬 (I let it do it twice and then punted)
18:02 < fikabot_> 💬 probably logfiles, Hank
18:02 < fikabot_> 💬 yep, I'll buy that
18:02 < fikabot_> 💬 and I had already figured I'm going to blow them away before I try this again
18:02 < fikabot_> 💬 Should be easy to find :-)
18:03 < fikabot_> 💬 I don't have time to do it today
18:03 < fikabot_> 💬 so it is moot
18:03 < fikabot_> 💬 I have a very short window because the logfiles were filling up the disk when set to the default 30 days anyway
18:03 < fikabot_> 💬 did I mention I hate MySQL
18:04 < fikabot_> 💬 I hate lots of stuff, I understand :-)
18:04 < fikabot_> 💬 actually the binlogs are one level up
18:04 < fikabot_> 💬 so that is now 7.5 GB for the database legitimately
18:04 < fikabot_> 💬 and about 700 MB of binlogs
18:05 < fikabot_> 💬 and you probably have loads of OS logs too ...
18:06 < fikabot_> 💬 The /var/log directory is 1.4 GB in total
18:06 < fikabot_> 💬 most of that is the journal
18:06 < fikabot_> 💬 you can decrease that too
18:07 < fikabot_> 💬 it may squeak me through on this upgrade, but in six months it won't
18:07 < fikabot_> 💬 at the next upgrade
18:08 < fikabot_> 💬 One thing that is an issue is column width and collation. Older database engines do not allow varchar beyond 191 chars. Later versions do; that is something I bumped into at Hubzilla too. We should be able to make that dynamic imho
18:08 < fikabot_> 💬 I'm running this on an instance with 25 GB of disk space
18:09 < fikabot_> 💬 I know most of the storage is third-party data, not user-generated data
18:09 < fikabot_> 💬 there are only two of us and we don't post *that* much lol
18:09 < fikabot_> 💬 :-)
18:10 < fikabot_> 💬 I do expire posts of others after some weeks, to be honest
18:10 < fikabot_> 💬 So my database is some 500 MB
18:11 < fikabot_> 💬 Running on a raspi with 2 GB :-) And SSD storage
18:12 < fikabot_> 💬 Yeah, I have it set to 30 days
18:12 < fikabot_> 💬 for that
18:12 < fikabot_> 💬 and 14 days for raw conversation data (not sure what that is though)
18:12 < fikabot_> 💬 the problem probably is that I have the twitter bridge activated
18:13 < fikabot_> 💬 I know from my account there were huge volumes of traffic coming in from that
18:13 < fikabot_> 💬 Ah, twitter ;-)
18:13 < fikabot_> 💬 for me alone
18:13 < fikabot_> 💬 moot now since I shredded my twitter account, but the other person is using it too
18:16 < fikabot_> 💬 I must state Hubzilla does allow me to star posts so those will be kept; other posts by others will be deleted after X days. I can also post with an expire date on a post. I love it, to be honest.
18:16 < fikabot_> 💬 It may do that too
18:17 < fikabot_> 💬 this goes back to: I wish the federating protocols had a backfill capability like the Matrix protocol does
18:17 < fikabot_> 💬 it would come in handy most immediately when first connecting to a user, especially when they aren't already followed by someone on that server
18:20 < fikabot_> 💬 You could actually request (backfill) previous messages now by just copying & pasting a post's link into the search box.
18:21 < fikabot_> 💬 And I think Friendica has a similar feature in the settings to keep favorited posts.
18:21 < fikabot_> 💬 B.t.w., nice to see you being involved in this, Hank. Always good to see people improving software
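On the binlog volume mentioned above, retention can be inspected and trimmed from SQL. A sketch; the retention variable name depends on the server version, and the systemd journal under /var/log is a separate knob (e.g. journalctl's vacuum options):

```
-- List binary logs and their sizes.
SHOW BINARY LOGS;

-- One-off cleanup of binlogs older than a week.
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;

-- Ongoing retention: MySQL 8.0 uses binlog_expire_logs_seconds;
-- older MySQL and MariaDB use expire_logs_days instead.
SET GLOBAL binlog_expire_logs_seconds = 604800;  -- 7 days
```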
18:21 < fikabot_> 💬 I don't know how the Twitter bridge is implemented, but I would imagine that all content, posts and media going through it would appear to be stored as if legitimate local users generated them (i.e., never being expired).
18:22 < fikabot_> 💬 Thanks :) wish I could say I contributed to it more than just live testing
18:22 < fikabot_> 💬 here are the top tables:
18:22 < fikabot_> 💬 ```
18:22 < fikabot_> 💬 +---------------------------+-----------+------------+
18:22 < fikabot_> 💬 | Table                     | Size (MB) | TABLE_ROWS |
18:22 < fikabot_> 💬 +---------------------------+-----------+------------+
18:23 < fikabot_> 💬 | conversation              |      1031 |     236521 |
18:23 < fikabot_> 💬 | post-content              |      1024 |     641643 |
18:23 < fikabot_> 💬 | post-user                 |      1003 |    1862771 |
18:23 < fikabot_> 💬 | item-uri                  |       427 |    1475097 |
18:23 < fikabot_> 💬 | post-media                |       395 |     745442 |
18:23 < fikabot_> 💬 | storage                   |       323 |      20000 |
18:23 < fikabot_> 💬 | post-thread-user          |       306 |     675892 |
18:23 < fikabot_> 💬 | post                      |       289 |    1261003 |
18:23 < fikabot_> 💬 | contact                   |       280 |     123025 |
18:24 < fikabot_> 💬 | apcontact                 |       237 |     131557 |
18:24 < fikabot_> 💬 | post-tag                  |       138 |    1313146 |
18:24 < fikabot_> 💬 | parsed_url                |       120 |      96154 |
18:24 < fikabot_> 💬 | post-thread               |       102 |     555131 |
18:24 < fikabot_> 💬 +---------------------------+-----------+------------+
18:24 < fikabot_> 💬 ```
18:24 < fikabot_> 💬 Why as a local user instead of as federated content?
18:24 < fikabot_> 💬 Sounds easier to do... 🙂
18:24 < fikabot_> 💬 possibly
18:24 < fikabot_> 💬 it should probably have a purge function then
18:25 < fikabot_> 💬 but that would be dangerous since it is all in one giant table
18:25 < fikabot_> 💬 And if federated IDs were included, it could (wrongly) allow various interactions with them as remote posts that (clearly) won't work.
18:25 < fikabot_> 💬 I'm also confused about how conversation got so large
18:25 < fikabot_> 💬 if that is the DM system
18:25 < fikabot_> 💬 Hank: you are using innodb, aren't you? In that case be aware that optimizing only clears space in a 'tablespace'. It will not clear the storage. You need to make a backup and import that to do it (yeah, that sucks)
18:25 < fikabot_> 💬 because I checked with the other user and they are not saturating it either
18:26 < fikabot_> 💬 When you have an expire active it will sort itself out later rather than sooner, but space is not reclaimed. Might fit your situation
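A listing like the one above can be produced from information_schema. A sketch; the exact query the poster ran is not in the log, and the schema name friendicadb is taken from the du/df discussion earlier:

```
-- Per-table size (data + indexes) and estimated row count, largest first.
-- Note that TABLE_ROWS is only an estimate for InnoDB tables.
SELECT TABLE_NAME AS `Table`,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS `Size (MB)`,
       TABLE_ROWS
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'friendicadb'
 ORDER BY DATA_LENGTH + INDEX_LENGTH DESC
 LIMIT 15;
```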
18:28 < fikabot_> 💬 I looked at the allocated but free space per table
18:28 < fikabot_> 💬 the worst table is like 60 MB
18:29 < fikabot_> 💬 Unless I'm reading it wrong
18:29 < fikabot_> 💬 ```
18:29 < fikabot_> 💬 +---------------------------+-------------+-----------+
18:29 < fikabot_> 💬 | TABLE_NAME                | DATA_LENGTH | DATA_FREE |
18:30 < fikabot_> 💬 +---------------------------+-------------+-----------+
18:30 < fikabot_> 💬 | conversation              |  1040269312 |  62914560 |
18:30 < fikabot_> 💬 | storage                   |   338231296 |   9437184 |
18:30 < fikabot_> 💬 | post-tag                  |    64585728 |   7340032 |
18:30 < fikabot_> 💬 | post-user                 |   238780416 |   6291456 |
18:30 < fikabot_> 💬 | post-thread-user          |    76136448 |   6291456 |
18:30 < fikabot_> 💬 | post                      |   125435904 |   6291456 |
18:30 < fikabot_> 💬 | apcontact                 |   166674432 |   6291456 |
18:30 < fikabot_> 💬 | tag                       |    11026432 |   6291456 |
18:30 < fikabot_> 💬 | item-uri                  |   220954624 |   6291456 |
18:31 < fikabot_> 💬 | parsed_url                |   107970560 |   5242880 |
18:31 < fikabot_> 💬 | gserver                   |    26787840 |   5242880 |
18:31 < fikabot_> 💬 | diaspora-interaction      |    37289984 |   5242880 |
18:31 < fikabot_> 💬 | post-thread               |    46727168 |   5242880 |
18:31 < fikabot_> 💬 | post-media                |   329039872 |   5242880 |
18:31 < fikabot_> 💬 | post-link                 |     2473984 |   4194304 |
18:31 < fikabot_> 💬 | inbox-status              |    12140544 |   4194304 |
18:31 < fikabot_> 💬 | post-delivery-data        |     1589248 |   4194304 |
18:31 < fikabot_> 💬 | post-content              |   958201856 |   4194304 |
18:31 < fikabot_> 💬 | notification              |     1589248 |   4194304 |
18:32 < fikabot_> 💬 | notify                    |     3686400 |   4194304 |
18:32 < fikabot_> 💬 | photo                     |     5783552 |   4194304 |
18:32 < fikabot_> 💬 | workerqueue               |      360448 |   4194304 |
18:32 < fikabot_> 💬 | fcontact                  |     5783552 |   4194304 |
18:32 < fikabot_> 💬 | contact-relation          |    11943936 |   4194304 |
18:32 < fikabot_> 💬 | contact                   |   180092928 |   4194304 |
18:32 < fikabot_> 💬 | cache                     |     2473984 |   4194304 |
18:32 < fikabot_> 💬 | worker-ipc                |      126992 |         8 |
18:32 < fikabot_> 💬 ```
18:32 < fikabot_> 💬 running `OPTIMIZE` on the tables previously didn't seem to have a huge impact on that
18:32 < fikabot_> 💬 Optimize will not fix that beyond memory usage (at least that was so a few years ago)
18:33 < fikabot_> 💬 oh, I thought it was supposed to clean up excess table free space on disk
18:33 < fikabot_> 💬 True, but in the past that freed space stayed reserved for the database
18:33 < fikabot_> 💬 Not sure if that still is the case
18:33 < fikabot_> 💬 what is the conversation table used for?
18:33 < fikabot_> 💬 I hope for conversations ;-)
18:34 < fikabot_> 💬 I'm afraid it is for reply mapping, now that I'm looking into it
18:35 < fikabot_> 💬 That does make sense, probably with an id for the top discussion and a parent id
18:35 < fikabot_> 💬 that would explain why it is 1 GB in size, and it not being some nefarious user doing some crazy shit with my server
18:36 < fikabot_> 💬 And expire might help; do keep in mind that uses CPU too
18:36 < fikabot_> 💬 And optimize once that is finished
18:37 < fikabot_> 💬 expire for 3rd party posts is already on
18:44 < fikabot_> 💬 about half of the data is Twitter data
18:44 < fikabot_> 💬 damn, that's a lot smaller percentage than I was hoping for
18:45 < fikabot_> 💬 It almost sounds like the expire isn't working
18:46 < fikabot_> 💬 possibly
--- Log closed Fri Jun 17 00:00:03 2022
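On the DATA_FREE and OPTIMIZE discussion above: whether a rebuild returns space to the operating system depends on innodb_file_per_table. With per-table tablespaces (the default on current MySQL/MariaDB), OPTIMIZE TABLE rebuilds the table's .ibd file and can shrink it; with a shared system tablespace the freed pages stay inside ibdata1, as described in the chat. A sketch, again assuming the friendicadb schema name from earlier:

```
-- Reclaimable space per table, largest first.
SELECT TABLE_NAME, DATA_LENGTH, DATA_FREE
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'friendicadb'
 ORDER BY DATA_FREE DESC;

-- For InnoDB this maps to ALTER TABLE ... FORCE, i.e. a full table rebuild.
OPTIMIZE TABLE `conversation`;
```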