An Open Letter to Digital Region

As a small business based in Sheffield, South Yorkshire, we take an active interest in what is happening in the region, especially when it comes to anything technology related. When Digital Region was announced in 2009 we looked forward to seeing the whole of South Yorkshire provided with world-beating broadband speeds, and thought what a great asset it would be for the region. Unfortunately, as things progressed slowly we began to realise that it wasn't going to be all that we had hoped for, especially once we looked at the commercials from an ISP perspective.

Jumping forward a few years, we recently tried to get ourselves connected to Digital Region. Our current broadband provider (Be There) has had a number of issues over the last year and we've had enough, and Origin Broadband were offering a "max" product that could see our very short line achieve speeds of >100Mbit/s down and >20Mbit/s up. Unfortunately, this just highlighted how wrong things are with Digital Region at the moment, and we can no longer hold back our thoughts on the whole issue. With that in mind, we have written an open letter to Digital Region, which you can find below:

An Open Letter to Digital Region (pdf, ~310KB).

We really want to see Digital Region succeed and fulfil its initial promises. We hope that this will spark useful and interesting debate on the subject and lead to changes and improvements at all levels.


Upgrades and Additions

We’ve had a few people ask us exactly what we were doing in our recent scheduled downtime sessions during April, so here’s a rundown:

New Datacentre Facilities

The bulk of the time has been spent moving the majority of our servers from our current point-of-presence (PoP) at Interxion in central London to our new PoP at London Hosting Centre (LHC) in the Docklands area. There are a number of reasons for this move, which we weighed up very carefully against the inconvenience it would cause you:

  • Inability to get extra power to our racks – we were running them half empty as we could not get extra power without moving our equipment to a different part of the building.
  • A change in focus for Interxion – they are moving towards fewer, much larger corporate customers.
  • Above market rate pricing – year on year we have absorbed price increases, but the latest increase is one we would have had to pass on, resulting in price rises of between 15% and 25% for you. This is not something we would have been happy doing, and a number of you told us you wouldn’t want to pay it.

When we weighed up these 3 main factors we took the decision to find alternative facilities to run our services from, and we are confident we have made the correct choice. We have always set out to work with partners for the long term, as we did with Interxion for over 7 years.

The eagle-eyed among you will have noticed a plural in the above statements: we have chosen facilities to work with, not just a facility. Our network and services are now present in 3 geographically diverse datacentres; along with London Hosting Centre we have set up PoPs in:

Manchester

We are now present in Manchester, with our first services planned to go live during May (more on which below). We take network connectivity in Manchester from two providers, as well as having a gigabit link between Manchester and London.

Woking

We also have a presence in the Sentrum IV facility in Woking. This is a highly secure, Tier III+ facility providing excellent options for customers who need to be outside of London, but not so far outside that it causes them major travel problems.

Network Upgrades

A major part of our work has been to upgrade our network so that it is now Juniper powered in the core and distribution layers. For LHC this means using the latest Juniper MX80 3D line of routers and EX4200 switches. For Manchester this means Juniper J-Series routers and EX4200 switches. Woking is currently connected via diverse Layer-2 services back to LHC for routing – network connectivity into the building is predominantly back-hauled to London first anyway, so carrying out routing from there would currently serve no purpose.

As time passes, the needs of the facilities change and traffic levels increase, we will review the equipment used at each facility, with the plan being to bring them all into line with each other using the Juniper MX 3D series of routers.

Gigabit for all

During the move to LHC we took the opportunity to replace the last of our 10/100 switches with new 48-port Gigabit-capable switches – all existing customers will now find they are connected to our network with a Gigabit port if their equipment supports it – a free upgrade that many providers charge monthly for!
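If you want to confirm it for yourself, the negotiated link speed is easy to check from a Linux host (just an example command – the interface name will vary on your system):

  # show the negotiated link speed for an interface (eth0 is an example name)
  ethtool eth0 | grep -i speed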

IPv6 Support

A major driver in replacing all of our routers has been IPv6. It is something that has been long overdue on our network and will be formally rolled out in June. Unfortunately many software vendors still aren’t supporting IPv6, so for some services we still won’t be able to offer it – one of those vendors is cPanel, so for the time being our shared business hosting still won’t support IPv6. This is something we (along with many other companies) are putting pressure on cPanel to implement as soon as possible.
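Once IPv6 is live on your service, a couple of quick sanity checks from a Linux host would look something like the following (a sketch only – the hostname is just an example of a dual-stacked site):

  # list any IPv6 addresses assigned to your interfaces
  ip -6 addr show

  # test IPv6 reachability to a dual-stacked host (example hostname)
  ping6 -c 4 ipv6.google.com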

Increased connectivity

We’ve increased our public Internet facing network connectivity by 100% in anticipation of take-up on our new services and also to match ever increasing broadband speeds.

The Future…

We’ve got a couple of new products on the way in May, as well as making some changes to the way we do certain things – but you’ll have to come back later to find out about those :)


Network Upgrades

On the evening of Saturday 16th April from 22:00 we will be performing essential network upgrades and maintenance. This is work that we hoped to have completed before we began moving any equipment from one facility to another, but unfortunately due to lengthy delays in the supply of some essential equipment this was not possible.

As part of this work we will be installing new Juniper Networks equipment to upgrade our routing capability. For the curious amongst you, we will be using Juniper MX80 routers and EX4200 switches.

The bulk of the service-affecting work will be migrating customer VLANs from the old routers on which they terminate to the new equipment. The impact on service should be no more than 5 minutes per VLAN, in two phases, and we expect much less if everything goes to plan and matches our testing.

Please accept our apologies for the relatively short notice of these works; we would not be performing them now if it were not 100% necessary. If you have any questions at all please do let us know.


Shared vs. Dedicated Hosting

We’re asked now and again what the difference is between our business class shared hosting and our managed dedicated hosting/servers, so we thought we’d give a quick 5 minute run-down of the main differences. The first thing most people notice is the cost difference between the two services; hopefully the next few paragraphs will give you an idea of why they differ so much.

Learning to share

As the name may suggest, shared hosting is a service that shares the hardware resources of a server (or servers) between a number of customers – it doesn’t mean that you are sharing your hosting account with your friends and family. Quite simply, we place multiple isolated user accounts on the same server hardware, where each is free to host as many domains as the package allows, within the web space and monthly data transfer allowances.

As it is a shared service, like all providers we have some provisos in our terms of service that prevent a single user from using all of the CPU time or all of the memory on a shared hosting server – after all, you wouldn’t want someone monopolising the server and your site being slow, just as they wouldn’t want your site to cause problems for them.
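Purely as an illustration of the idea (and not necessarily how we enforce it ourselves), per-account limits on a Linux server can be applied with control groups, along these lines:

  # hypothetical sketch: capping one hosting account using cgroups v1 (libcgroup tools)
  cgcreate -g cpu,memory:/shared/example-user
  cgset -r cpu.shares=256 shared/example-user              # a relative share of CPU time
  cgset -r memory.limit_in_bytes=512M shared/example-user  # an upper bound on memory use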

At KDA we place a maximum of between 50 and 100 user accounts per web server, as we like to give every user a good share of server resources such as CPU and memory – some providers will place 10x that many on a server, which massively increases the potential for problems.

Dedicated to the task

A dedicated server, unlike shared hosting, is solely for your use – no other customers will use the same hardware. You can place anything from 1 site to 1000 sites on it (although we’d not recommend the latter), or you can use it just for email, databases, or serving video files if you want. As long as it fits within our terms of service and is legal, you can use it for whatever you wish.

At KDA we only use high quality server hardware from Tyan or Supermicro, with enterprise-grade SAS hard disks designed for 24×7 operation in a server environment, and we use hardware RAID to duplicate your data over at least 2 hard disks, increasing data security and performance.

Performance

With a dedicated server you have the ability to use 100% of the CPU and memory resources all the time if you need to (although we’d be recommending some changes or upgrades if you found that happening). With shared hosting you may only use a fraction of the resources for an extended period of time – you can use more, but only for limited periods, to keep it fair to other customers.

Our base specification managed dedicated server comes with a single Quad Core 2.26GHz CPU – giving you a total of 9.04GHz of CPU dedicated 100% to you – and on top of that it includes a massive 12GB of RAM, also 100% dedicated to you.

Reliability

In theory shared hosting and dedicated hosting should be as reliable as each other, all things being equal. Whilst our own shared virtual hosting is incredibly reliable, it is inevitable that at some point a website will get featured on Digg, on the TV, or elsewhere, causing it to see a large increase in traffic – which can sometimes cause problems for other users of the server, such as their sites slowing down or, in very rare circumstances, the server crashing.

With a managed dedicated server the only time this will potentially be a problem is if it is your own website getting 1000s of extra users visiting it or buying from it – which if you’re getting 100s of extra sales might not be a problem in your eyes. Of course with all those extra server resources 100% dedicated to you, you might not even see any performance issues with 1000s of extra users visiting your site or buying from it.

Features and Flexibility

With a dedicated server you have the potential to run different software compared to shared hosting. If you need a specific version of PHP/Perl/MySQL or some other software then you can have that on a dedicated server, whereas with shared hosting that just isn’t possible – as it would affect all the other customers on that particular server. Not only that, but if you need to run software that integrates with one of your suppliers, or a site search engine service, then you can do that too, as the server is dedicated 100% to you.

Security

When it comes to security, the fewer people that have access to a system, the more secure it tends to be. With a dedicated server we can restrict who has access to FTP, who can log in to any optional control panel, and so on. As part of our standard server setup we restrict all inbound and outbound access to your server except for public services such as web serving and incoming email, and all other potentially sensitive services are restricted to a specific set of IP addresses. With shared hosting we obviously cannot do this, as the administration required to cope with end users changing IP address all the time would need several full time staff.
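To give a flavour of that kind of policy (a simplified sketch only, not our exact ruleset – the trusted range shown is a documentation example), an iptables setup might look like:

  # default deny inbound, allow public services to all, restrict SSH to a trusted range
  iptables -P INPUT DROP
  iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
  iptables -A INPUT -p tcp --dport 80 -j ACCEPT                    # web serving
  iptables -A INPUT -p tcp --dport 25 -j ACCEPT                    # incoming email
  iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT  # SSH from a trusted range only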

All of our shared hosting systems are designed to be secure and isolate users from each other, but unfortunately you can’t always guard against unknown bugs in software used (such as web servers, PHP etc.) and there is always the potential that such a bug allows users to interfere with other users or the smooth operation of the server – With a dedicated server you are the only user.

Cost

With a dedicated server you are the only user, so you have to bear all the costs; there are no other users for those costs to be spread between. That means we have to make sure your monthly fee covers the cost of the server itself, the power, the cooling, software licences and staff time – which are the main reasons for the large cost differential. We realise that for many users the jump in price is quite considerable, which is why we also have an alternative that provides many of the benefits of dedicated hosting at a price between that of shared and dedicated hosting. That solution is virtual dedicated servers, which we’ll cover next time.

Please don’t let any of the above put you off shared hosting; it is still suitable for the vast majority of websites, especially when implemented well and not shared between 1000s of users. As they say on Crimewatch, “don’t have nightmares” – if you’re using shared hosting, chances are it’s the correct choice for your website.


Cloud Testing: Storage Failover

As you’d expect, we’ve been extensively testing the failover and high availability features, as they are a key selling point of our Cloud Platform. Our main area of concern has of course been data storage – without data or disk, there’s not much point in having compute power.

In terms of storage availability we will initially have a pair of SAN SUs (Storage Area Network Storage Units) with 15k RPM SAS drives. Each SU has redundant PSUs and fans, dual Quad Core CPUs, 32GB of RAM for cache, and boots from an SSD. Storage is configured equally over both SUs in a round-robin fashion, which balances the load over the two SUs and maximises performance – so for half of the virtual machine instances SAN SU1 will be the primary, and for the other half SAN SU2 will be the primary. Each SU is also configured as a mirror for the other SU’s volumes, so if a failure should ever occur – say SU1 fails and your storage is primary on SU1 – then SU2 will start serving your storage to you.

In our testing so far we’ve seen anywhere from zero to a maximum of two seconds of impact in a failover situation, depending on the exact nature of the failure. Whilst ideally we’d like to bring this down to zero for all failure types, it then becomes a delicate balance between false positives (where the system thinks something has failed because it takes fractionally longer to respond than normal) and detecting actual failures – if we start detecting lots of failures that aren’t, it affects the stability of the system as it flip-flops between failure and recovery, which is far worse than a second or two of actual pause in disk I/O (note: you shouldn’t see disk I/O fail, as it is queued; it will just pause momentarily). In a maintenance situation we can take out an SU without any impact to your service at all :)

Overall the initial SAN consists of:

  • Multiple SAN SUs mirroring data for each other
  • Multiple network switches

Each SAN SU consists of:

  • Dual Quad Core CPU
  • 32GB RAM
  • SSD for Storage OS
  • Enterprise SAS 15k RPM Drives
  • RAID-10 (Disk Mirroring + Striping)
  • N+1 Redundant PSU – Fed from two separate power feeds
  • Multiple connections to multiple switches

What all this boils down to is that each SU is highly redundant on its own, as well as being very fast; we then add another SAN SU which mirrors data for it, giving even more redundancy in the system, as well as increased throughput. What it also means is that we’ll never be the cheapest for disk space – for every 1GB of disk space available on the system we have to provision 4GB of space, spread over 4 drives: RAID-10 inside the SUs, then mirrored between the SUs. For reference we are using Seagate and Hitachi 15k RPM SAS drives in 450GB capacity – considerably more expensive per GB than SATA drives, but worth every penny for the performance and reliability :)

As you’d expect from us, we’re also looking at what changes can be made to see if we can bring all failover situations down to zero impact – but we’ll be doing this in our lab and it will likely appear in future revisions of our cloud hosting platform. We’re always looking to improve :)


Cloud Testing: Disk

I know quite a few of you are following the development of our new cloud hosting platform closely, so here are some very initial results from some brief disk testing. First up we have the standard Linux hdparm – nothing too strenuous, but it does give a quick idea of disk performance:

/dev/sda1:
 Timing cached reads:   23004 MB in  1.99 seconds = 11575.07 MB/sec
 Timing buffered disk reads:  336 MB in  3.02 seconds = 111.37 MB/sec
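For anyone wanting to run the same quick check, numbers like those above come from an invocation along these lines:

  # -T times cached reads, -t times buffered reads from the disk
  hdparm -tT /dev/sda1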

As you can see we’re getting 111MB/s – not bad for an initial test, and something confirmed by Bonnie++, a far more strenuous disk test:

Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
karl-test3.sheff 4G 61756  79 121058  17 50688   1 51968  54 111809   0  5428   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 45927  71 315487  99  2746   2 44193  69 392805 100  2221   2

Bonnie backs up our initial numbers from hdparm, which is nice to see – and does so without using 100% CPU for either reads or writes.
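If you’d like to try a similar run yourself, a bonnie++ invocation something like this would do it (the directory and user are examples; the 4G test size matches the run above):

  # -d test directory, -s total size of the test files, -u user to run as
  bonnie++ -d /mnt/test -s 4G -u nobody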

These are very preliminary numbers – we’ve not even got multipath running to the SANs yet, or the HA going; in theory we could get 4x those numbers with both of those items up and running. We’ve also not got all the disks running on the SAN for these tests – in fact it’s only running off 4 disks, whereas in production each SAN will have 8 disks in the SAN head end plus at least 1 x 16 bay disk tray as well.
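For reference, bringing up dm-multipath on a Linux host is roughly along these lines (a generic sketch only – package and service names vary by distribution, and this isn’t necessarily how our SAN connections will be configured):

  # Debian/Ubuntu-style sketch (RHEL uses the device-mapper-multipath package)
  apt-get install multipath-tools
  modprobe dm_multipath
  /etc/init.d/multipath-tools restart
  multipath -ll    # list the multipath devices and the paths behind them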

We’ll have more numbers as the testing progresses, also if there is anything you’d like us to test then please do let us know.


It’s raining hardware from the cloud

As promised, we’ve got some pictures of some of the hardware we’re using in our cloud hosting platform that will be used to support our business class web hosting as well as provide cloud based solutions to you. I apologise for some of the pictures – even with 5MP the iPhone still isn’t quite the great photo taking tool it should be for the money.

First up we have some factory fresh ECC memory – 192GB to be precise:

Next up we have one of our SAN head end boxes, probably the most important component in the whole of the cloud platform:

Inside the SAN head end boxes we’re using Adaptec 5805 and 5085 SAS RAID cards – these provide us with 8 x internal SAS ports, as well as two x12 SAS expansion ports for connecting up disk trays. Once we’re done testing we’ll be adding disk trays with up to 24 x 15k 450GB SAS drives per tray.

The next most important components and the ones that will actually run the cloud computing are the hypervisor boxes, here you can see two of them next to each other (minus CPUs):

Just in case anything should go wrong, we have our backup NAS system. I don’t have a picture of the 1U head end box, but we do have pictures of the 24 bay disk trays that we’ll be using:

That’s all for now. When we get a minute we’ll get some pictures of all the kit racked up for you, and maybe even some video (we know a lot of you like flashy lights :))


Not your average shopping list & free bubbly!

This week we’ve been getting the hardware ready to test our planned new cloud hosting platform and as you can imagine it’s not been your average shopping list. So far this week we’ve purchased:

  • 408GB of RAM
  • 200.48GHz of CPU
  • 240GB of Solid State Disk (SSD) drives
  • 6TB of 15k RPM Enterprise SAS drives
  • 6TB of SATA drives
  • 8 x Server Chassis for Hypervisors
  • 3 x Server Chassis for Storage
  • 1 x 24 Bay Disk Tray
  • 3 x 8 Internal Port Adaptec RAID Cards
  • 3 x 8 External Port Adaptec RAID Cards
  • 8 x Dual Port Intel Network Cards
  • 3 x Quad Port Intel Network Cards

So, not your average shopping list by some margin, and the bank manager’s face is looking a bit grim right now as well. It’s shaping up to be a fun couple of weeks building and testing all this kit – although I’m just wondering if I can sneak it all away for a new PC :)

Once our new cloud platform is up and running we’ll be migrating all of our business class web hosting over to it – so you’ll benefit from our extensive investment in our cloud hosting platform even if you’re not utilising it directly, just by making use of our business web hosting service from as little as £50 per year.

We’ll post some updates and pictures when it all arrives, for those of you who we know like to see lots of hardware. In the meantime I look forward to all the interesting suggestions that we’ll no doubt get for what else we could use all the hardware for – the most interesting one posted in the comments by this time next week gets a bottle of Champagne.