accVIEW Rejuvenated

Well, it’s been way too long since I opened an editor and got to work on accVIEW’s source, and it really showed. In reality, accVIEW was something I slapped together in an afternoon for Vanguard Frontiers, home of myself, PyjamaSam (of Capsuleer fame) and some of the best pilots I’ve ever flown with. We needed a better way to do API checks and this was it.

I made it public and its popularity grew. I added some features, added a premium option for those who wanted a bit more, and it’s been ticking along since: occasionally throwing horrible errors, with the background worker regularly falling over and dying, all while running on a Quantum Rise datadump. There was also a major security flaw: because we didn’t store API keys, we couldn’t re-validate people regularly, so anyone who left a corporation could still view their old corp’s requests- and couldn’t update their account to their new corporation either.

No more.

accVIEW has had a facelift, and gained skill distribution graphs, a fundamental change to API key handling, improved code throughout and an updated database dump. I’ve also added a ‘forgot password’ feature for those who don’t remember their logins too well, and fixed a few outstanding bugs.

If you’re an accVIEW user, you’ll be prompted for your API key the next time you log in. This is expected: we now keep a copy of the key so we can re-validate it regularly (once a day) and confirm you’re still in the corporation you were in last time we looked. If you change corporations, your main character will be dissociated, and you’ll have to re-enter your API key and choose a new main character the next time you log in.
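The daily re-validation pass works roughly like this sketch; the class and field names here are illustrative, not accVIEW’s actual code:

```ruby
require "date"

# Hypothetical model of an accVIEW account. Each day we compare the
# corporation the API currently reports against the one on record.
Account = Struct.new(:name, :corporation_id, :main_character, :last_checked) do
  # Returns true if the account is still valid, false if it was dissociated.
  def revalidate!(current_corporation_id)
    self.last_checked = Date.today
    if corporation_id == current_corporation_id
      true
    else
      # Corporation changed: drop the main character so the user must
      # re-enter their API key and pick a new main on next login.
      self.main_character = nil
      self.corporation_id = current_corporation_id
      false
    end
  end
end
```

The point of storing the key is that this check can run in the background without the user present; previously there was no way to notice a departure until the user next logged in.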


Varnishing over Varnish

Well, we’ve given up on Varnish. After desperately trying to make it play nicely with everything else on the system, we’ve removed it from our application stack entirely. Why? Memory architecture.

Part of the documentation on Varnish’s website is a long architectural explanation of why the OS should handle what stays in RAM and what gets swapped to disk, and why Varnish therefore does no memory management of its own. There is a problem here, however: this design means Varnish essentially assumes the OS will handle contention between itself and other programs.

This is not a smart assumption. First off, some OSes are terrible at that sort of thing (Linux, to be fair, is pretty good). But here’s the real issue: take a database server like PostgreSQL. PostgreSQL correctly lets the OS handle disk caching rather than duplicating the effort internally. This is a great design, and it means you don’t have to guess how much RAM to give PostgreSQL for disk caching; the OS handles it all. Since it’s just cache, that space can be reallocated to programs which need some RAM, and later given back to PostgreSQL (or any other application).
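PostgreSQL’s configuration reflects this directly; a typical (illustrative, not our actual) fragment of postgresql.conf looks like:

```
# Keep PostgreSQL's own buffer pool modest and let the OS page cache
# do the heavy lifting for disk caching.
shared_buffers = 256MB

# This setting allocates nothing - it just tells the query planner how
# much OS-level cache it can reasonably expect to be available.
effective_cache_size = 4GB
```

The key point is that `effective_cache_size` is advisory: the actual cache is whatever RAM the OS has free, which shrinks and grows as other programs need it.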

varnishd was regularly climbing to around 4-6 gigabytes of RAM usage, forcing even application memory into swap and leaving the OS with no memory for disk caching, with a terrible knock-on impact on the performance of PostgreSQL on the same machine. I should point out that the 4-6 gigabyte figure was obtained while running varnishd with a 1 gigabyte disk cache.
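That 1 gigabyte figure refers to the storage backend size set at varnishd startup; the `-s` flag is real varnishd syntax, though the addresses and paths below are illustrative:

```
# Cap the cache *storage* at 1 GB (file-backed, as we ran it):
varnishd -a :6081 -b localhost:8080 -s file,/var/lib/varnish/storage.bin,1G

# Note: -s limits only cached object storage. Per-object overhead,
# thread stacks and workspace live outside that limit, which is how
# total memory use can climb far past 1 GB.
```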

Basically, if you want to run Varnish (and there are many good reasons to; aside from this issue it’s a fantastic cache server), you need a dedicated machine for it. Its architecture makes it impossible for it to coexist with other programs on the same server. We even tried having Monit restart it when it reached 1 gigabyte of RAM usage, but the memory pressure remained, and the constant restarts hurt the cache’s effectiveness. A 45% cache hit rate on Varnish was a lovely thing and helped reduce load on our backend servers, but it was slowing those same servers down enough for that to not really work out at all.
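For the record, the Monit rule was along these lines (the pidfile and init script paths are assumptions, not our exact config):

```
# Restart varnishd once its resident memory passes 1 GB.
check process varnishd with pidfile /var/run/varnishd.pid
  start program = "/etc/init.d/varnish start"
  stop program  = "/etc/init.d/varnish stop"
  if totalmem > 1024 MB for 2 cycles then restart
```

The trouble is obvious in hindsight: every restart empties the cache, so the hit rate never recovers before the next restart.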

With the 1 gigabyte of RAM we freed by removing Varnish, we’ve added four more application servers to EVE Metrics. These are more than coping with demand, and we’re happily seeing things stay nice and stable even with a lot of API accesses. So far, then, so good.

On a side note, users of the popular accVIEW application will be happy to know I’m spending a chunk of time this weekend improving the app and adding some much-needed features: persistent API key storage (so corporate security can be maintained even when people leave corporations or join new ones), a forgot-password feature, and performance improvements.

Of OLAP and T3 (Plus more on projects)

Blimey, it’s been a while since my last post. I hasten to add that this delay comes only by virtue of the fact that I am exceptionally busy with various projects right now. I thought an update might be appropriate, in any case.

I’ve spent most of my time working on EVE Metrics. There’s some very cool, very powerful changes coming up soon; early Feb saw the introduction of much more accurate prices and indexes, with a newly improved algorithm for calculating the average prices of items. But even better is some of the new stuff coming in the next few weeks- notably the implementation of a fully-fledged OLAP warehouse for EVE Metrics, which will open up some very awesome possibilities in the long run.

Also under the scalpel this month has been the API system. EVE Metrics will support both full and limited keys. There will be a quirk, however: if you want to make use of other people’s API data, for example to see more detailed market analysis with transactions hooked in, you’ll have to share your own- benefiting from other people’s data means reciprocating. Your data will, in all cases, feed the global averages, but entirely anonymously: the number of transactions per day on a given item may include data from your API, but nobody would know it. Hopefully this will encourage users to share data rather than hide it. I’m planning to make it an opt-out system, with the opt-out offered on the API key page as part of the form. It’ll be really hard to miss, and those who are paranoid or wish to hide their activity completely can check the box to opt out.
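The share-to-see rule boils down to something like this sketch; `ApiUser` and its fields are illustrative, not EVE Metrics’ actual code:

```ruby
# Hypothetical model of an API user under the reciprocity rule.
ApiUser = Struct.new(:name, :shares_data) do
  # Detailed views of other people's data require sharing your own
  # (sharing is the default; the form checkbox flips shares_data off).
  def can_view_detailed_data?
    shares_data
  end

  # Everyone's data feeds the global averages, always anonymously,
  # regardless of the opt-out.
  def included_in_global_averages?
    true
  end
end
```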

I released accVIEW a few days ago, and it’s had quite rapid takeup from corporations. It’s a service that lets you perform background checks on prospective new members of your corporation- the basic tool lets you view skills, characters on an account, and various bits of information like their corp details, CEO, and so on. The premium version (for the low cost of 150m ISK) lets you see the applicant’s wallet journal (with tools to show suspicious transactions and filter the results), as well as their recent kills/losses.

EVE’s M10 expansion, Apocrypha, should be awesome. Tech 3 is going to be great- lots of people complain about the skill loss and the fact that it’ll make FCing a nightmare. Well, no. The skill loss makes sense, and provides an interesting new dynamic to EVE. FCing- well, those who complain about T3 making FCing impossible are evidently not up to scratch as FCs. It’ll give FCs a real change and challenge for the first time in years. The wormhole stuff will be an interesting thing to watch pan out- there’s lots of potential there, and it could end up being a lot of fun…

Nexus is progressing slowly but surely; the large amount of data we’re gleaning via an API-scraping installation for Sc0rched Earth is helping no end with development, and we’re busy tidying things up behind the scenes and refining some of the interface to make more sense. Once I’ve gotten ActiveWarehouse’s ETL library working properly, my next step will be to break out Photoshop and my text editor- EVE Metrics, ISKsense, maybe accVIEW, and the new MMMetrics site (which will be launched soon) are all going under the knife and getting a serious facelift. And then it’s on to even more awesome stuff for EVE Metrics!