In case you haven’t noticed, the Interactive Alch Table has been acting up lately, displaying incorrect prices and, in some cases, a price of 0!
Obviously, a price of 0 indicates something is wrong code-wise, but even a slightly inaccurate price – I think they were a day or two behind at some point – can cause major headaches and real loss of cash (very negative profit) if you don’t pay close attention and compare values between the table and the game, just to be sure.
As I write this, it looks like well over a quarter of the items in the database are listed as having a price of 0, and therefore a “profit” equivalent to exactly their alch value. This is obviously not the case.
I don’t think this problem is on Jagex’s end. I can run the automated download/update script on my local machine, but I can’t test the insertion/update functions without a connection to the database itself, which I currently can’t get remotely – and even running the scripts manually on the server (via ssh) doesn’t work. If anything, it seems to make things worse!
The culprit is almost certainly this bit of code (line 103, to be exact):
$price = end($obj['daily']);
where $obj is the decoded JSON data, held briefly in memory. In PHP, the decoding itself takes next to no time and is hardly resource-intensive at all. What’s strange is that it’s the same collection of items every time, and they have nothing in common. It’s also not consistent: sometimes it works, other times it doesn’t.
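One behaviour worth noting here: end() on an empty array returns false, which silently becomes 0 when stored as a price – a plausible source of the 0-price rows whenever an item’s 'daily' series comes back empty or the decode fails. A minimal defensive sketch (the latestDailyPrice helper and the sample JSON are my own, hypothetical, not the site’s actual code):

```php
<?php
// Hypothetical sketch: guard against a failed decode or an empty
// 'daily' series before calling end(). end() on an empty array
// returns false, which would be stored as a price of 0.

function latestDailyPrice(string $json): ?float
{
    $obj = json_decode($json, true);
    if (!is_array($obj) || empty($obj['daily'])) {
        // Decode failed or no daily data: return null instead of 0
        return null;
    }
    $daily = $obj['daily'];
    return (float) end($daily);   // last element = most recent price
}

// A well-formed response vs. one with an empty 'daily' series
var_dump(latestDailyPrice('{"daily": {"1": 100, "2": 105}}')); // float(105)
var_dump(latestDailyPrice('{"daily": {}}'));                   // NULL
```

Returning null instead of 0 would at least make the broken rows easy to filter out of the table instead of showing a bogus “profit”.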
Of course, this only started after I expanded the database (just about doubling the item count) and pointed queries at a new “expanded” table. I’m not sure where the issue lies at present. It’s not permissions – I checked all that as well.
At first, it crashed my rendering engine, so consider yourselves lucky, I guess.
There’s a decent chance, unfortunately, that the entire MySQL backend was corrupted – at least the VSC database – when I ran out of disk space last week. I had messed up a crontab such that instead of dumping the databases to files on a thrice-daily basis, the server was tarballing snapshots of the ENTIRE web-facing directory and pushing them to Dropbox – also thrice daily. Bandwidth usage was still low enough that no alarms were raised, but the disk eventually got so full that the system was unable to log error messages stating that the tar operation had failed because there was no disk space. Yikes.
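For the curious, the intended crontab entry would look something like this (paths and schedule are hypothetical, not my actual setup):

```
# Dump all databases to a file three times a day, rather than
# tarballing the whole web-facing directory.
# Note the escaped % -- in a crontab, an unescaped % starts a new line.
0 */8 * * * mysqldump --all-databases > /var/backups/db-$(date +\%F-\%H).sql
```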
That’s what you get for selecting the default 7GiB root volume. Oops!
I’ll have to investigate this more in-depth. Until then, apologies for the inconvenience!