Push some code, deploy it, have a beer and cross your fingers. If something breaks, your users will surely use the neat feedback form to report the bug and maybe invite you to their birthdays while they’re at it.
I can’t stress this enough: testing is important. As your code grows, it gets extremely difficult (and tedious!) to manually test everything.
“I’m a rockstar/ninja, my code is rock solid!” – whoops, no payment processing this weekend
“just a small fix” – after partying hard for 8hrs straight @STYLIGHT party
“it’s just frontend, man” – crap, where did the navigation go?
We’ve all been there.
Writing tests with QUnit is pretty straightforward:
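A minimal example along these lines (using the 2012-era global QUnit API; the add function is just something illustrative to test):

```js
// A tiny function under test -- purely illustrative.
function add(a, b) {
  return a + b;
}

module('math');

test('add() sums two numbers', function () {
  equal(add(1, 2), 3, '1 + 2 is 3');
  ok(add(-1, 1) === 0, 'works with negative numbers');
  deepEqual([add(1, 1)], [2], 'works inside structures too');
});
```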
More sophisticated and real-world examples can be found in the cookbook.
Now that you wrote your first unit test you probably want to run it. As described in the docs, create a test.html file, include your tests and check the results in the browser. While the result page is pretty cool, opening a browser every time you want to run your tests gets old fast. It’s also not suitable as part of a CI strategy (think Jenkins/Hudson). The answer is PhantomJS.
From the website: “PhantomJS is a headless WebKit scriptable with a JavaScript API.”
Available for Linux, OS X and even Windows, it’s a command line tool that, coupled with the grunt-qunit plugin, makes running your tests a joy. Add a qunit target to your Gruntfile and you’re good to go.
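A hedged sketch of such a Gruntfile (grunt 0.4-style, where the task comes from grunt-contrib-qunit; the paths are illustrative):

```js
// Gruntfile sketch: the qunit target points PhantomJS at our QUnit test pages.
module.exports = function (grunt) {
  grunt.initConfig({
    qunit: {
      all: ['test/**/*.html'] // every QUnit test page under test/
    }
  });

  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.registerTask('test', ['qunit']); // `grunt test` runs the suite headlessly
};
```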
In case you’re not using Grunt (you should!), you can run the test runner from the console as described here.
This will run your code without opening a browser and spit out debug information and test results to your console. It’s even possible to have the results saved to disk as XML so your CI dude can play with it. Of course, this assumes your code is written in a testable way. I’d suggest reading up on it here:
Parrot AR Drone 2.0 (via http://tbreak.com/tech/2012/07/quick-look-parrot-ar-drone-2-0/)
On April 21st and 22nd, Make Munich took place, a fair for all DIY aficionados, hobbyists, hackers and makers. Lots of companies showed off what cool and creative things you can do with 3D printers, laser cutters and other technologies. At many booths you could get your hands dirty and “make” something yourself. Besides all this mesmerizing stuff that immediately brings back your creative childhood days, a hackathon took place, focused on Chrome Packaged Apps and the Parrot AR Drone. Some of us formed a team and participated, because who doesn’t like to hack some drones? We ended up hacking together an app that lets you control the drone via tweets. Admittedly a very limited, but nonetheless quite fun way to interact with the quadrocopter. In the end, we got to take home some prizes!
STYLIGHT @ ChromeDrone Hackathon: Dimi, Sergii and Dom
The Parrot AR Drone is a quadrocopter, controlled over WiFi, that features a 720p/30fps front camera as well as a vertical QVGA/60fps camera, the latter being used for ground speed measurement. It also has a 3-axis gyroscope, a 3-axis accelerometer, a 3-axis magnetometer, a pressure sensor and additional ultrasound sensors. You can check out the full spec at the official site. Parrot also provides an SDK, which enables you to write your own software for the drone.
A few months ago, Paul Lewis and Paul Kinlan from Google demonstrated how they managed to control the drone via a gamepad from within a Chrome Packaged App using the new networking APIs of Chrome, as well as the Gamepad APIs in HTML5.
There are three UDP sockets and a TCP socket you can connect to in order to send and receive data from the drone. The drone sends navigation data, such as status, position, speed, engine rotation speed and the like, to the client on UDP port 5554. The video stream is transmitted over TCP on port 5555. Control and configuration data is sent to the drone via AT commands (text strings) on UDP port 5556. The commands need to be sent on a regular basis, for smooth movements at least every 30ms. The drone considers the WiFi connection lost if it receives no commands for more than 2 seconds. The string commands basically consist of a command type and some arguments. For instance, the progressive commands for movement take the percentages of the maximum configured angles for roll, pitch, gaz and yaw. I won’t go into more detail on the commands here though; you can read up on the exact syntax in the docs bundled with the Parrot SDK.
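As a hedged illustration of what such a progressive command looks like (the float-as-int32 encoding follows the SDK docs as we read them; the helper names are ours):

```js
// The drone reads each float argument as the 32-bit integer that shares its
// IEEE-754 bit pattern; seq must strictly increase from one command to the next.
function floatToInt32(f) {
  var buf = new ArrayBuffer(4);
  new Float32Array(buf)[0] = f;
  return new Int32Array(buf)[0];
}

var seq = 1;
function pcmd(roll, pitch, gaz, yaw) { // percentages of the max angles, in [-1, 1]
  return 'AT*PCMD=' + [
    seq++,
    1, // flag: 1 = apply the progressive values, 0 = hover
    floatToInt32(roll),
    floatToInt32(pitch),
    floatToInt32(gaz),
    floatToInt32(yaw)
  ].join(',') + '\r';
}

// pcmd(0, -0.2, 0, 0) pitches the drone forward at 20% of its configured max angle
```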
Since we had never played with the Parrot AR Drone or with Chrome Packaged Apps before, we figured we’d start out simple and get the hang of the Parrot SDK and the Chrome APIs. We initially had lots of ideas that involved the drone’s video or photo capabilities, but since the video feed wasn’t working properly yet, we scratched those and focused on different interaction methods. As mentioned earlier, controlling the drone via a gamepad had already been implemented, so we came up with the idea to use Twitter to control the drone. Tweet2Drone was born. The idea is to subscribe to either a hash-tag or a Twitter handle, which is then used to periodically search Twitter for tweets. The retrieved tweets are put into a queue (DRONE.TweetQueue), which is in turn also processed periodically. If a tweet contains a specific command, it is sent to the drone; if not, it’s just dropped.
We based our app on the code made available on GitHub and stripped out the gamepad support. We provide a single input field in the GUI that lets the user enter the hash-tag/handle to search Twitter for commands with, and we store it using Chrome’s Storage API, so you don’t have to enter it every time the app is restarted. As a bonus, we use the sync feature, which means the input field is synchronized across all Chrome browsers you are signed in to. This sounds like a lot of work, but it is actually super easy: basically you just specify that you want to use storage.sync instead of storage.local and Chrome does all the rest for you!
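A minimal sketch of that (the storage key and the element id are our own choices, not from the app’s code):

```js
// Save the hash-tag/handle whenever it changes; chrome.storage.sync
// propagates it to every Chrome the user is signed in to.
var input = document.querySelector('#hashtag'); // illustrative element id

input.addEventListener('change', function () {
  chrome.storage.sync.set({ hashtag: input.value });
});

// Restore it on startup -- swap in storage.local and nothing else changes.
chrome.storage.sync.get('hashtag', function (items) {
  if (items.hashtag) input.value = items.hashtag;
});
```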
For better control, we added a ‘start’ button to the interface, which triggers the init command of the drone’s API, establishing the socket connections and doing some basic configuration. It also resets the queue, in case it contains tweets from a previous run, and starts searching Twitter. The Twitter search is a basic Ajax request to ‘http://search.twitter.com/search.json?q=’ with our configured hash-tag/handle as the search query. We then process each individual tweet returned and check whether it was previously inserted into the queue. If yes, we disregard it. If not, we extract the command from the message and push a JSON object containing the actual tweet message, the command and some metadata onto the queue. Afterwards, we use setTimeout to call the search function again.
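Roughly like this (a sketch: the queue helpers, extractCommand and the poll interval are illustrative; the response fields follow the old v1 search API):

```js
// Poll the (old) Twitter search API and feed new tweets to the queue.
function searchTwitter(query) {
  var url = 'http://search.twitter.com/search.json?q=' + encodeURIComponent(query);
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    JSON.parse(xhr.responseText).results.forEach(function (tweet) {
      if (DRONE.TweetQueue.contains(tweet.id_str)) return; // already seen
      DRONE.TweetQueue.push({
        id: tweet.id_str,
        text: tweet.text,
        command: extractCommand(tweet.text), // e.g. 'takeoff', 'land', ...
        receivedAt: Date.now()
      });
    });
    setTimeout(function () { searchTwitter(query); }, 5000); // and around again
  };
  xhr.send();
}
```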
The queue in turn pops and processes an item every 3 seconds and calls the DRONE.Translator. The translator checks whether a known command is present and executes the associated DRONE.API call. We chose this architecture since it will allow us to easily extend the project and support other input methods such as text or speech.
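In a sketch (the object names follow the post; the method names are assumed):

```js
// Every 3 seconds, take one tweet off the queue and hand its command
// to the translator, which maps it onto a DRONE.API call.
setInterval(function () {
  var item = DRONE.TweetQueue.pop();
  if (!item) return; // nothing tweeted lately
  DRONE.Translator.process(item); // e.g. 'takeoff' -> DRONE.API.takeOff()
}, 3000);
```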
Tweet2Drone in Action
The user gets visual feedback in the interface about all tweets that were processed, as well as an indicator whether they contained a recognized command or not. In addition, the interface has global notifications for connection-related messages, e.g. when the emergency stop button was pressed, which causes the drone to land immediately and closes all socket connections. We also augmented the API to support the official emergency stop command, but deemed it a bit too brutal, since it cuts power to all rotors no matter how high the drone is currently flying, causing it to drop like a dead bird.
This little hack is far from finished, since we only recognize and execute a few commands cleanly. We still have some trouble getting the timing of rotations right and actually moving the drone in any direction in a controlled manner, but we had less than 24h to hack this up and get acquainted with both Chrome Packaged Apps and the Parrot AR Drone. The code can be found on GitHub at https://github.com/lc0/Tweet2Drone. Since we now have a drone to play around with, we might put some more work into it and polish it up. Feel free to contribute to this little hack!
Image courtesy of hp.com (http://www.hp.com/hpinfo/newsroom/press_kits/environment/images/reconditionedpcs.jpg)
The last couple of months have been quite amazing for us: we introduced boards to our platform and, in addition, launched STYLIGHT in four more countries (France, Italy, Sweden and the UK). This expansion also caused our team to grow quite a bit, since we have dedicated positions for business development, content & community, as well as SEO and SEM for each country. And we are continuously looking for more awesome people to join us!
Of course, all these new people need a working and fully configured computer in order to start being awesome here at STYLIGHT. Every once in a while existing machines break down and need to be re-installed, or get replaced with new and speedy laptops. Up until recently, we used to install and configure all our computers manually. While most of our configuration is pushed via the group policy in our domain, software such as the Microsoft Office suite or the most up-to-date Chrome and Firefox browsers, as well as other tools particular to each department, had to be installed by hand. This worked quite well so far, but we have grown to a size where it would be nice to automate the process more (or even fully), so we in the engineering department can focus on coding new and exciting stuff instead of installing computers.
Slipstreaming & Customizations
We wanted to start out with a small automation step and gradually build up. One thing that always eats up time when setting up a new PC is making sure that all the drivers are installed correctly. Hence, we first integrated drivers into a custom Windows® 7 installation DVD. Microsoft provides its own tool set for this purpose: the Windows Automated Installation Kit (AIK). The kit supports the configuration and deployment of Windows 7 and Windows Server 2008 R2. It also supports a variety of other tasks, including volume activation management, user state migration and the like, which are not particularly interesting for us.
Unfortunately the kit relies on ImageX, a command line tool for capturing, creating, modifying and applying Windows images, which is not really handy. There is, however, a variety of programs available that offer a more intuitive GUI and utilize ImageX under the hood. We settled on RT Se7en Lite, a nice freeware tool which lets you integrate drivers, Windows updates and hotfixes, language packs and 3rd-party applications with silent installers. It also allows you to remove unwanted components from Windows, tweak various system settings, customize Windows and create unattended installations. While we were at it, in addition to slipstreaming drivers, we also included the latest updates and hotfixes and customized the standard logon screen to be a little bit more STYLIGHTish.
Custom Windows 7 logon screen
You could do all that with Windows AIK alone, it just requires a lot more effort. We got the most recent driver packs from http://driverpacks.net/. For obtaining offline versions of the Windows updates and hotfixes, we used another nifty tool called Windows Update Downloader (WUD). Not having to install any updates and drivers straight after installing Windows is quite nice, but what’s typically really annoying about the Windows installation is the various inputs you have to make, e.g. for regional settings, computer and user name, which are unfortunately spread out across the installation. This is where unattended installations come in.
Basically all you need for an unattended installation is an XML file that tells the Windows setup what to use as input for the various pieces of information it requires. The XML file must be called “Autounattend.xml” and has to be in the root of the installation medium for the Windows setup to pick it up. As already mentioned, you can create unattended installations with RT Se7en Lite and also with Windows AIK. We actually found the Windows tool more comfortable this time. However, without a tutorial or the reference you are basically left alone in the dark.
I’ll just cover some of the more interesting sections of the XML file here, and link to some good resources at the bottom. You can control every single step of the installation, including the formatting and partitioning of the hard drive, which can be quite dangerous. Since we store all our important files on servers, including the Windows user profiles, it is not a problem in our case if the unattended setup wipes the disk and creates a fresh partition for the installation. This can be achieved with the <DiskConfiguration> section. It is part of the Microsoft-Windows-Setup component and looks like this in our case:
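A sketch of such a <DiskConfiguration> section, following the standard Autounattend.xml schema (the partition labels and formats are illustrative):

```xml
<DiskConfiguration>
  <Disk wcm:action="add">
    <DiskID>0</DiskID>
    <WillWipeDisk>true</WillWipeDisk>
    <CreatePartitions>
      <!-- small 100MB system partition for boot and recovery data -->
      <CreatePartition wcm:action="add">
        <Order>1</Order>
        <Type>Primary</Type>
        <Size>100</Size>
      </CreatePartition>
      <!-- main partition, extended to fill the rest of the disk -->
      <CreatePartition wcm:action="add">
        <Order>2</Order>
        <Type>Primary</Type>
        <Extend>true</Extend>
      </CreatePartition>
    </CreatePartitions>
    <ModifyPartitions>
      <ModifyPartition wcm:action="add">
        <Order>1</Order>
        <PartitionID>1</PartitionID>
        <Format>NTFS</Format>
        <Label>System</Label>
      </ModifyPartition>
      <ModifyPartition wcm:action="add">
        <Order>2</Order>
        <PartitionID>2</PartitionID>
        <Format>NTFS</Format>
        <Label>Windows</Label>
      </ModifyPartition>
    </ModifyPartitions>
  </Disk>
</DiskConfiguration>
```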
In the <CreatePartitions> block, two partitions are created. The first is a small 100MB system partition required by Windows 7 for storing boot and recovery info. Its size is limited to 100MB by the <Size> tag. The second partition fills the remaining space on the HDD, which is achieved by setting the <Extend> tag to true. In the <ModifyPartitions> block we just format the previously created partitions and label them. The partitions are referenced by the number assigned in the <Order> tag and receive a <PartitionID> tag, which is later used to tell the Windows setup on which partition to perform the installation.
Since we don’t have a volume licence activation key, we also provide a trial licence key in the XML file so the setup can run fully unattended; later we only have to activate the computer using the appropriate licence key. In addition, we set the regional settings, set up the administrator account and silence all setup dialogs. Microsoft provides two sample XMLs, one for a minimum and one for a maximum unattended install, here.
Put the Autounattend.xml file in the root of the DVD generated by RT Se7en Lite et voilà, you have a fully unattended and slightly customized Windows 7 installation medium. This is already quite nice and time-saving, since all you have to do is put in the DVD, boot from it and continue hacking on more awesome stuff while the computer installs itself. Once the setup is finished, you log on using the administrator account set up in the XML file, activate the installation using a real licence key and add the computer to the domain.
So far so good, but we still need to install Microsoft Office, our corporate identity fonts and templates and a bunch of other software. Of course we didn’t settle for doing this manually after we had fully automated the OS installation. Again, if you are a hardcore Microsoft admin, you would probably use just Sysprep and ImageX to automate the software installation as well. This seemed a bit of an overkill for us and we also wanted more flexibility. Luckily we stumbled across the Windows Post-Install Wizard (WPI).
Automated Software Installation
The Windows Post-Install Wizard is a nifty little tool that allows you to configure different commands to be executed. This lets you copy files, change registry entries, execute batch scripts and run setups with the appropriate silent flags. When you run the wizard, you can both set up different configurations and choose one to be executed. Each configuration can contain different tasks, which in turn consist of one or more commands. Tasks can be grouped together in categories (e.g. Applications, Drivers, etc.). This is helpful, as it allows you to later exclude all tasks of one or more categories in a configuration from being executed. WPI is much more powerful still, as it allows you to use conditionals when executing commands, define dependencies between different tasks and much more.
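To give an idea, a heavily hedged sketch of a config entry for a silent Chrome install (WPI stores its configuration as JavaScript; the field names below are from memory and may differ between WPI versions):

```js
// Hedged sketch of a WPI config.js entry -- treat field names as illustrative.
pn++;
prog[pn] = ['Google Chrome'];   // display name in the wizard
cat[pn]  = ['Applications'];    // category, used for (de)selecting groups of tasks
dflt[pn] = ['yes'];             // selected by default
cmds[pn] = [
  '"%wpipath%\\Install\\ChromeSetup.exe" /silent /install' // silent installer flags
];
```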
As you can see, besides some tasks for installing different software, we also copy all necessary files and apply our corporate identity MS Office theme via WPI.
Example of executing a silent Chrome installation via WPI
WPI allows you to select/deselect categories or individual tasks on a configuration level.
As you can see in the pictures, we easily integrated several silent setups. We currently install Adobe Reader, the Adobe Flash plug-ins, as well as the Google Chrome browser, Mozilla Firefox, Skype and our anti-virus client. In addition, we perform a silent installation of Office (which uses a config.xml file similar to the Autounattend.xml for managing the setup) and several proofing kits, and copy our Office corporate identity theme over from the server and apply it. Since we also don’t have a volume activation key for the MS Office suite, we again use a trial key for the setup and have to activate it later on using a regular licence. Recently, we got a big batch of brand new speedy Lenovo laptops with SSDs. Since their drivers weren’t in the driver packs that we slipstreamed into our OS installation medium, we just integrated them via WPI. Since we host the WPI executable on one of our intranet servers, we can just connect to the network after the OS has been set up and have it install all the software and drivers with a single click. It even joins the computer to the domain for us.
So, to recap our process for setting up a new PC so far: insert and boot from the DVD and let Windows 7 install itself. Log on as the administrator, connect to the network share where WPI is located and execute it. Automagically, the most common software is installed and the computer auto-joins the domain and reboots. Log on again and activate Windows 7 and Office. Done. Easy as pie. However, there is one caveat: the installation DVD. Us techies tend to be a bit messy at times and easily lose stuff, especially shiny DVDs that might not have been labeled properly. No need to fret though, there is of course also a solution for this problem: network installations.
As you would expect, there are again several ways to install an OS over the network. There is of course the solution from Microsoft themselves, the Windows Deployment Services (WDS), which allow you to deploy Windows operating systems over the network either as .WIM files or image-based setups. They also allow you to distribute driver packages and other software to clients. A free and open-source alternative is OPSI, which is short for “Open PC Server Integration”. Both seemed to be a little bit of an overkill for our purpose, especially since we had already made use of the other tools to automate our deployment. Luckily, we again found a smart little tool that does all the things we need (and way more!) and does them great: SERVA.
SERVA is an all-in-one portable multi-server. It’s just one executable that lets you launch HTTP/FTP/TFTP/DHCP/DNS/BINL/SNTP and SYSLOG servers. For our purpose of setting up a PXE network installation, we only configured a TFTP server and a DHCP proxy (since we already have a DHCP server in the network). For the TFTP server, a root directory needs to be configured that holds the files it should serve, basically the location where we put our previously customized Windows 7 installation files. To route the requests of PXE clients to the machine hosting the installation files, we make use of SERVA’s proxyDHCP setting and enable the BINL (Boot Information Negotiation Layer) option. Setting this option will cause SERVA to create some specific folders in the TFTP root folder the next time it is restarted: more specifically, a WIA_RIS and a WIA_WDS sub-folder. These folders have to be shared on the network and populated with the OS installation files. In our case, we had to copy the previously generated Windows 7 installation files into a separate folder inside the WIA_WDS folder. Since these two folders are the locations that will be passed to the PXE client, they need to be made accessible from the network. In order for SERVA to automatically provide the right information to the PXE client, it expects the network shares to be named “WIA_RIS_SHARE” and “WIA_WDS_SHARE” respectively. That’s basically it! After another restart, SERVA automatically creates a preinstallation environment and boot images for each installation folder in the WIA_RIS and WIA_WDS folders. In the newest version (v2.1) SERVA will also integrate the Autounattend.xml file into the boot image, and it provides several options for how to include the network share credentials, which allows for a fully automated network installation!
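The resulting layout looks roughly like this (the name of the installation folder is our own):

```
TFTP root
├── WIA_RIS\            shared as WIA_RIS_SHARE
└── WIA_WDS\            shared as WIA_WDS_SHARE
    └── Win7_STYLIGHT\  our customized Windows 7 installation files
```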
Now our deployment for client PCs boils down to: boot the PC into PXE, execute WPI to install additional software, drivers and customizations once the OS has been installed, and finally activate both Windows and MS Office using regular licence keys. Done! And what is great about this setup is that it gives us so much flexibility. We don’t need to burn multiple DVDs just to install multiple PCs in parallel, we just have to restart SERVA and let it regenerate the boot images whenever we change either the installation files or the Autounattend.xml file, and we can easily add/modify the programs being installed via WPI!
If you are passionate about DevOps and PC administration or even better, if you think we could totally make our client management more awesome, feel free to drop us a line or maybe even apply for a job here at STYLIGHT!
Let us show you! We took some snapshots this morning at the office. Unfortunately Instagram doesn’t offer an API to process images, so we just created an account. Follow us if you fancy that. Let’s start:
That’s Sebastian and Matze at work
Our typical workplace has:
A MacBook with OS X or a workstation running either Linux or Windows
Two 24″ screens. We’re experimenting with going to three right now: one for the website, one for the editor and one for logs.
12-16GB RAM, 256GB SSD
A developer VM. We run our code in VMs which get (nearly) the same config as our production machines via Puppet.
Headphones. Although our office is nice and quiet, we all enjoy listening to music every once in a while. We even have our own remixes.
Sometimes we mix in some treats:
Each engineer in turn runs on their very specific fuel:
Be it Red Bull…
…or coffee coming from our awesome coffee machine.
You know, we are all about numbers
So is our coffee machine.
That’s a pretty huge coffee count – 8525 coffees, to be precise, counted since we moved into our new office three months ago, which works out to nearly a hundred a day. Before Christmas we had been really busy releasing our latest product feature – the boards – so setting up the office lagged a little behind, but now we’re catching up fast.
Our latest installation
…are swings, which are an awesome opportunity to take a 5-minute break. See for yourself:
As you can see, there’s much space in the new office
That’s because it’s located in an old printing factory. But it’s right in the city center, just 50 m from the subway station “Mailingerstrasse” (two stops from the central station).
Lunch Time is the best time
If you zoom right in, you’ll see we are surrounded by small food places and eateries. We usually fan out to our favorite places (we like the Leibspeis a lot, but also the burgers at Hans im Glück, the Vietnamese restaurant Pho, or the little Italian place Grasso with its delicious Tris di Pasta) and then gather in the atrium to have lunch together:
Aren’t those all great reasons to pay us a visit?
Just come by for a coffee or stay longer – we’re hiring! Also, our next company event is around the corner; follow us on Twitter to get the exact location and date. Here is an impression of the last STYLIGHT party:
Last year has been a fun ride for all of us at STYLIGHT. I’d like to kick off our blog by looking back at what we should have avoided and what worked really well. Here goes!
Top 3 new tools we used
New Relic I knew it existed, but I finally gave it a spin after attending the Hamburg Developer Conference. Seeing it in action was awesome; it’s just like having a profiler running all the time. And with a really good user interface! Here’s what our response time looked like before and after we deployed New Relic:
Guess when we installed it and found our performance bugs. It’s still expensive, but we found it totally worth the money. And do talk to them about volume discounts!
Even though there is Puppet for managing servers, every so often we need to automate things where the order of tasks matters, or just need a more lightweight solution. That’s where fabric comes in, and it has taken the home folders of our engineers by storm. Give it a spin: be it rebasing and pushing to GitHub or a rolling deploy of servers, it’s all just a few lines. And there’s really easy support for doing things in parallel!
It’s scalable image processing in the cloud, and an intriguing idea: use CloudFront with your own image processing script as the upstream. Then just set the expiration date far enough in the future that you essentially need to compute each image only once, because from then on it’s cached for all time. Voilà: easy-to-use image hosting that can perform your custom transformations via URL. Of course their service offers way more than just some simple transformations. Check it out!
Top 3 stupid mistakes we made
Maxing out INT
Hitting the boundary of INT on an auto_increment field by using INSERT INTO … ON DUPLICATE KEY UPDATE, which really does increase the auto_increment counter even when it only performs an update.
All those special chars
Generating a search-engine-friendly URL from a string with non-ASCII characters: did you know that in Sweden you’d map ä -> a, but in Germany you’d have to convert ä -> ae? Isn’t pluralism in Europe fun?
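A toy sketch of that locale-dependent transliteration (the maps are deliberately tiny, real ones need many more entries):

```js
// Locale-specific transliteration tables -- illustrative, not exhaustive.
var TRANSLIT = {
  de: { 'ä': 'ae', 'ö': 'oe', 'ü': 'ue', 'ß': 'ss' },
  sv: { 'ä': 'a', 'ö': 'o', 'å': 'a' }
};

function slugify(str, locale) {
  var map = TRANSLIT[locale] || {};
  return str
    .toLowerCase()
    .replace(/[^\x00-\x7f]/g, function (ch) { return map[ch] || ''; }) // transliterate
    .replace(/[^a-z0-9]+/g, '-')  // collapse everything else to hyphens
    .replace(/^-+|-+$/g, '');     // trim leading/trailing hyphens
}

slugify('Räksmörgås', 'sv');     // "raksmorgas"
slugify('Gemütlichkeit', 'de');  // "gemuetlichkeit"
```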
Just signup with GMail…
Using private GMail accounts for everyone in a 50+ people organisation. When we first started STYLIGHT, we just operated out of our private GMail accounts. Once we grew, we simply recommended GMail to everyone, so people signed up. Even though we have had a Google Apps account for some years now, it wasn’t until mid last year that we made the effort to migrate everyone. But by then it was almost too late. Ever tried to migrate a complex Google Docs setup with permissions and the like? A nightmare, and Google doesn’t offer any tools! Scripting certain things on your own works, but in the end we had a good experience with ShuttleCloud.
Top 3 things we replaced
OK, not everywhere, and we’ve only managed it halfway so far. We’re moving it all to Python now. Once we’ve completed the transition, we’ll share our learnings.
We’ve switched from master-master replication + MySQL Proxy to Percona Cluster + HAProxy. Goodbye, replication lag! We’ll soon post more details about our migration.
Jumping on the Bootstrap bandwagon really paid off for us. Not only did we shorten development times on the frontend (and got responsive design as a bonus), but we also dramatically beautified the look of our backend tools. Going from no CSS to slightly styled Bootstrap feels like fast forward – what eye candy!
What did you change within the last year? Let us know on Twitter: @stylight_eng!
37signals leads the way, and we don’t want to lag behind: from now on you can always find our uptime statistics for the last 30 days here (http://numbers.stylight.de/uptime/)!
The uptime refers to the reachability of stylight.de and the presence of certain elements on the page. Planned maintenance is also included, as long as it affects the whole site. After all, the user doesn’t really care why the site can’t be used.
The displayed values come from our external monitoring, which is provided by Pingdom. Internally, of course, servers are unreachable or under maintenance every now and then. But in those cases they are taken out of the load balancing in time, so the user isn’t affected. By the way, if you want to report your own uptime, Pingdom’s “Share this data” feature makes it pretty easy. Who else dares to show their uptime publicly?
Since the first Panda release in the US, there has been controversial discussion about why some sites came out of the updates as winners and others as losers. We at Stylight have been puzzling over it diligently too; since last Friday the ranking algorithm changes have finally arrived here in Germany. As far as we can tell so far, we at least don’t belong to the losers. But what protected us?
All you hear from Google is that a page should be meaningful and informative for the user. We put a lot of effort into that, but is it measurable? Presumably not every page gets targeted by a Google Quality Rater and then “penalized”. seo-book.de raised an exciting thought: could onsite KPIs such as the bounce rate serve as an indicator of a good page? Since I put my trust in numbers above all, and this blog is supposed to be quantitative, this post examines statistically significant indicators of a Panda penalty. SEOmoz’s Search Ranking Factors serve as the model for the quantitative statements.
Methodology and Data
As a starting point we take a list of the Panda winners and losers, in this case the version from Searchmetrics. This gives us two clusters: the losers (n = 62) and the winners (n = 20). From this we derived the label “1” for losers and “-1” for winners. For the 82 domains in total, we collected the Alexa data. In a first step, we were particularly interested in the onsite KPIs:
Alexa Traffic Rank (global)
Daily reach (in percent, global)
Pageviews (in percent, global)
Pageviews per user
Bounce rate (in percent)
To filter out short-term effects, we used the 3-month averages in each case. For each variable we then computed the correlation between the variable and cluster membership. The result, the correlation coefficient, is read as follows: the larger it is, the stronger the variable’s influence on how the domain fared under Panda. In addition, we determined the significance level using a t-test (two-sided, unequal variances). Generally, a value < 5% (1 - significance level) is considered a reliable statement.
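For reference, the statistic behind that test is Welch’s t for two samples with unequal variances (standard textbook form, not spelled out in the original analysis):

$$ t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}} $$

where $\bar{x}_i$, $s_i^2$ and $n_i$ are the mean, variance and size of the winner and loser clusters respectively.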
A high bounce rate as an indicator of a Panda penalty
Four of the KPIs turned out to be significant according to the t-test (see the table at the end of the post for all the data) and are thus, statistically speaking, factors in a Panda penalty. The arrow after each KPI indicates whether a higher value points to a loser (arrow up) or whether a smaller value tends to mark the domain as a Panda loser (arrow down). Our results match the findings of seo-book.de, according to which a higher bounce rate could be an indicator of a Panda penalty. It is also the comparatively strongest indicator, although the other onpage KPIs, pageviews per user and time on site, also seem to have a clear effect. That the traffic rank has an influence as well I explain by Google first trying to go after the big fish with bad content (“content farms”). When looking at correlations like these, one has to be aware that they do not necessarily represent a causal relationship (i.e. “a high bounce rate always leads to a Panda penalty”), as the following plot illustrates:
On the left, the bounce rates of the Panda winners are plotted; on the right, those of the Panda losers. You can see that there are Panda losers with a lower bounce rate than some Panda winners. On average, however, Panda losers clearly have a considerably higher bounce rate. That seems reasonable: it is commonly assumed that many indicators have to come together for a Panda penalty, rather than a single metric serving as the measure.
Traffic sources as another Panda factor
Brand queries seem to be crystallizing as one of the ranking factors for 2011. We were curious whether this, too, could be an indicator of Panda penalties. Furthermore, Alexa has some interesting numbers on a site’s traffic sources: on the one hand the share of search in the domain’s total traffic, on the other the pages visited before and after the domain within a surf session (clickstream). Whether a user has to start another search after visiting a site certainly sounds plausible as a quality criterion. In most cases it would mean the offering was of low quality, so the user has to return to Google once more.
And indeed, Google as the subsequently visited page (downstream Google) shows a strong correlation with Panda crashes. Remarkably, receiving a lot of traffic from Google (upstream Google) has an even higher correlation. Perhaps Google takes the view that pages users do not navigate to on their own are of lower quality. That the share of search traffic in total traffic has a lower correlation than search traffic from Google can be explained by the fact that Google can measure its own search traffic much more easily than that of all search engines.
To confirm the above assumption that brand queries are a trust signal, we pulled the monthly search volume for the domain names from the AdWords tool. We then correlated this with the Panda outcome. Voilà: it has a clear influence, even more so than all the onpage factors.
Taken together, you can see that many factors have to come together to suffer a Panda penalty. As a countermeasure, bounce-rate optimization is certainly the first thing to recommend, and it can be achieved with quite simple measures (fitting visuals such as brand logos helped us a lot). A reduction of the search traffic share can presumably be achieved through a shift in online marketing, for example by buying cheap display traffic.
Increasing time on site without shady tricks, on the other hand, is likely to be harder. But if you manage to build a cool “product”, you no longer have to worry about time on site or brand traffic. Of course, building a cool product for users is always easier said than done. We, at any rate, have been tinkering with something in that direction for quite a while; in two weeks the time will come, and we’re looking forward to it!
We hope you enjoyed this first post! In the future we will publish more internal analyses like this one, and we look forward to a lively discussion. So best subscribe to our RSS feed or follow us on Twitter / Google+.
Many thanks to Max and Nico for their help, and to Max for the logo!
Don't worry, we didn't heart all the products ourselves – those were our much more fashion-knowledgeable users.
Everyone at STYLIGHT loves numbers. But we, the engineers, are addicted to them. We need to measure everything and then try to improve on it. Since not all of us are fashionistas, it gives us a way into understanding fashion ;)
We are a small team that's proud to produce a great product, with attention to getting even the small things right. Here we want to share our experiences and solutions.