2014-01-25
digital
tech jekyll

Though my Jekyll setup runs perfectly smoothly, I wanted some way to verify whether (all) pages are re-generated when I make changes to the CSS or other minor things that aren't immediately visible.

So I wanted a timestamp of the page generation. To keep the code re-usable, I looked for a custom Liquid tag for that purpose and found a gist by blakesmith: render_time.rb.

Since I wanted the timestamp only as a comment in the HTML pages, I made a small change. Besides that, it's the same - so thanks a lot blakesmith!

Here's render_time.rb:

module Jekyll
  class RenderTimeTag < Liquid::Tag

    def initialize(tag_name, text, tokens)
      super
      @text = text
    end

    def render(context)
      "<!-- #{@text} #{Time.now} -->"
    end
  end
end

Liquid::Template.register_tag('render_time', Jekyll::RenderTimeTag)

Just drop it into your _plugins folder and add the following line (enclosed in Liquid opening and closing tags, which I can't show here because the Liquid tag would then be executed... :-/)

render_time Page generated at:

to e.g. your default.html.
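
In the generated HTML you'll then end up with a comment along these lines (the timestamp is obviously just an example):

<!-- Page generated at: 2014-01-25 21:03:12 +0100 -->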

2014-01-19
digital
tech jekyll

I have been writing blog posts for more than five years now. One thing that I observe - like any other person on the net who blogs - is that I blog less often than I want to.

Why is that? I assume that when the 'process' is cumbersome, you blog less - especially when it comes to writing short posts about this and that. (Amateur) blogging should be easy, not a multi-step login-create-publish-admin job.

I tried different engines in the past: from hand-made HTML via Wordpress to, most recently, secondcrack. Now I've reached the next stage: Jekyll.

Jekyll brings everything I want:

  • understands Markdown files
    Markdown makes WYSIWYG editors obsolete. All you need is a simple text editor and you can write blog posts. In my setup it's the really great MarkdownPro.

  • generates static pages
    In the not too distant past, dynamic web sites were state of the art. Today we know that (at least) for blogs you don't need all the stuff that PHP gives you. And for the rest, there is still JavaScript. Static web sites do their job without putting any pressure on the system. You don't have to fear memory limits, CPU load or the like.

  • processes text files
    I don't want to use a browser to write a blog post. I want to write a text file and save it to a synced directory. Since I run an owncloud instance for various other stuff, it was really easy to create a new directory, let owncloud do the syncing and tell Jekyll to take this as its source directory. BTW: images etc. are handled exactly the same way. It couldn't be simpler.

  • automatically updates the site on changes
    When I'm finished writing a post, I want it to be live within seconds. I don't want to push a repository. I don't want to log in somewhere just to hit the publish button. Too many steps. Jekyll has a --watch option, which is all you need: it constantly watches for changes to anything in the source directory and starts to (re-)generate your site.

  • provides a simple template system
    If you want a unique layout for your Wordpress blog, you'll face a steep learning curve. It's no surprise that there's a whole industry selling professional templates for WP. Even with secondcrack - which is definitely simpler than most other systems - I found myself investing way too much time into developing scripts. Jekyll bases its output on Liquid tags, and though I didn't know anything about them until a few days ago, I quickly felt comfortable and produced results in little time.

Jekyll is so simple and quick. Setup is done literally in minutes. Just type gem install jekyll into your terminal and you're basically done. No database setup, no user management. Now make one or two changes in the _config.yml file, point your web server's document root to the output directory (or vice versa), save a Markdown file in the _posts directory and you're live.
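
To give an idea of how little is involved, a minimal terminal session could look roughly like this (the site name and the destination path are just examples):

gem install jekyll
jekyll new myblog                                  # scaffolds _config.yml, _posts/ etc.
cd myblog
# edit _config.yml, drop Markdown files into _posts/, then generate
# the site and keep watching for changes:
jekyll build --watch --destination /var/www/blog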

If my theory works, you'll see some more posts on this site than in the past. So stay tuned.

2014-01-14
digital
google privacy

Google buys thermostat maker Nest for $3.2 billion in cash.

In a Q&A section of the announcement post on Nest's company blog, the new member of the Google family tries to play down the privacy concerns of its users, who fear that Google will know a lot of additional things about them in the near future (emphasis mine):

Will Nest customer data be shared with Google?
Our privacy policy clearly limits the use of customer information to providing and improving Nest’s products and services. We’ve always taken privacy seriously and this will not change.

So far, so good. But what happens if the services integrate with a Google product? Nest has an answer to that, too:

Will Nest and Google products work with each other?
Nest’s product line obviously caught the attention of Google and I’m betting that there’s a lot of cool stuff we could do together, but nothing to share today.

Put plainly: yes, we will integrate with Google services, and yes, your data will then become part of the galactic Google database.

Remember: Google is an advertising company and makes a living from knowing as much as possible about you. So, dear Nest user, be prepared to get some advice from AdSense regarding your room temperature, or some hints about a better time for your drive to work.

2013-06-07
digital
tech apple

This year's developer conference, held by Apple beginning June 10th, is teased with a headline:

Where a whole new world is developing

Apple wouldn't be Apple if there weren't a deeper meaning in this headline. So I started to think about what this year's headline could be hinting at.

Everybody talks about the known things: previews of iOS 7 and MacOS 10.9, the successor of Mountain Lion, some (minor) hardware refreshes etc. And everyone seems to be sure that we shouldn't hold our breath hoping for a new iPhone or iPad. What do these things have to do with the 2013 claim? Nothing.

Does the 'new world' mean a focus on the iPhone's push into China or India, countries that are obviously late to the iPhone party? No - we already saw a China focus in last year's iOS 6. Dedicating the most important event of the Apple calendar to welcoming the huge market of India? Surely not.

What actually surprises me is that no one talks about the iWatch or the Apple TV (aka iTV) in the context of WWDC - OK, the iWatch seems too far away, but what if the "new world" that's "developing" is simply a new world of iOS devices that hasn't been open to developers until now: the Apple TV?

Therefore my money would be on the introduction of the APIs for bringing all the great content and apps to the big screen via an App Store for the Apple TV.

We'll see.

2013-03-24
digital
howto dev jenkins

Update: The plugin is now available in the official Jenkins plugin repository! Details here: https://wiki.jenkins-ci.org/display/JENKINS/JiraTestResultReporter-plugin


Testing your code with unit tests is a fine thing and using a Jenkins CI server for those tests is even better. Automatically creating issues in Jira for failed tests makes the workflow complete. This is what the JiraTestResultReporter plugin for Jenkins does.

This plugin examines the build job for failed unit tests. It works by using Jenkins' internal test result management to detect failed tests. Just let Jenkins run and report your unit tests, e.g. by adding the "Publish xUnit test results report" post-build action to your build job.

If JiraTestResultReporter detects new failed tests, it will create an issue for every failed test case in Jira.

Installation

As long as my hosting request to get the plugin included in the official plugin repository of Jenkins CI is pending, you'll have to either build the plugin yourself or download the most recent snapshot:

  • Build yourself
    • Download or clone the source code from GitHub
    • cd into the downloaded directory
    • execute the maven command mvn package (see the terminal sketch after this list)

or

  • Download the plugin package from here.

then

  • Upload the built or downloaded file JiraTestResultReporter.hpi to the plugins directory of your Jenkins installation or use the plugin uploader from Manage Jenkins -> Manage Plugins -> "Advanced" tab
  • restart Jenkins
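
Put together, the build-it-yourself route looks roughly like this in the terminal (the repository URL and the Jenkins plugins path are placeholders):

# clone and build the plugin (repository URL is a placeholder)
git clone https://github.com/<user>/JiraTestResultReporter.git
cd JiraTestResultReporter
mvn package                                # produces target/JiraTestResultReporter.hpi
# copy the plugin into Jenkins (path is a placeholder), then restart Jenkins
cp target/JiraTestResultReporter.hpi /var/lib/jenkins/plugins/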

Usage

  • In the build job add JiraTestResultReporter as a post-build action.
  • Configure the plugin for this job. See the help boxes for details. I have the dedicated Jira user 'jenkins_reporter' for these kinds of automatic reports.

  • Build your job. If there are failed tests, the plugin will create issues for them. This will (should!) happen only once for every new failed test; 'new' in this case means tests that have an age of exactly 1.

OCLint is a static code analyzer for C, C++ and Objective-C. You'll find it here on GitHub.

Today the maintainer merged my pull request, in which I added a reporter module that writes a PMD-style file.

With that in place, you can have Jenkins' PMD analysis report on your coding sins.

Here's how to set the whole thing up:

1. Setup OCLint
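
The OCLint part boils down to: download a release archive for your platform from the project site, unpack it, and note its location - that path shows up as PATH_TO_oclint-release in the build file below. Roughly (the archive name is a placeholder):

# unpack the downloaded OCLint release (file name is a placeholder)
tar -xvzf oclint-release.tar.gz
# remember the absolute path of the resulting directory; it is referenced
# below as PATH_TO_oclint-release and contains the bin/ directory with the tools
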
2. Build file for Jenkins

The invocation of OCLint is configured in a build.xml. I'll walk you through this one.

<?xml version="1.0" encoding="UTF-8"?>
<project name="fooProject" default="build-fooProject">

<property environment="env"/>

<target name="build-fooProject" depends="prepare,oclint" />

<target name="clean" description="Cleanup build artifacts">
    <delete dir="${basedir}/build/oclint" /> 
</target>

<target name="prepare" depends="clean" description="Prepare for build">
    <mkdir dir="${basedir}/build/oclint" /> 
</target>

Standard stuff so far: set up the project and prepare the directories.

<target name="oclint">
    <antcall target="xcodebuild-clean" />
    <antcall target="xcodebuild" />
    <antcall target="oclint-xcodebuild" />
    <antcall target="oclint-parse" />
</target>

Our oclint invocation has four steps.

<target name="xcodebuild-clean">
    <exec executable="xcodebuild">
        <arg value="-configuration" />
        <arg value="Release" />
        <arg value="clean" />
    </exec>
</target>

This ensures that we have a clean build.

<target name="xcodebuild">
    <exec executable="xcodebuild" output="xcodebuild.log">
        <arg value="-configuration" />
        <arg value="Release" />
        <arg value="-arch" />
        <arg value="armv7" />
        <arg value="CODE_SIGN_IDENTITY=" />
        <arg value="CODE_SIGNING_REQUIRED=NO" />
    </exec>
</target>

Now we build our project. The important part is output="xcodebuild.log"; this will write the output to a file which will be fed to a helper script in the next step.

<target name="oclint-xcodebuild">
    <exec executable="PATH_TO_oclint-release/bin/oclint-xcodebuild" />
</target>

oclint-xcodebuild reads the xcodebuild.log and produces the file compile_commands.json. This file holds all the compiler stuff and is the input format for oclint.

<target name="oclint-parse">
    <exec executable="PATH_TO_oclint-release/bin/oclint-json-compilation-database">
        <env key="PATH" value="${env.PATH}:PATH_TO_oclint-release/bin/"/>
        <arg value="--" />
        <arg value="-o=${basedir}/build/oclint/lint.xml" />
        <arg value="-report-type=pmd" />
        <arg value="-stats" />
    </exec>
</target>
</project>

Finally, this is where the magic happens. oclint-json-compilation-database feeds the compile_commands.json file to oclint. The -report-type=pmd flag tells it to use the PMDReporter, which will write its findings to a file called lint.xml.

Be sure to consult the documentation for OCLint and its helpers for the various arguments you can provide.

I created a gist with the whole file here.
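
If you want to check the build file outside of Jenkins first, you can simply run the default target with Ant (assuming Ant is installed and the PATH_TO_oclint-release placeholders have been replaced):

ant -f build.xml build-fooProject
# the PMD-style report should then be at build/oclint/lint.xml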

3. Configure the job in Jenkins
  • Go to the configuration page of your job in Jenkins.
  • Add a build step that invokes the build.xml (e.g. an "Invoke Ant" build step)
  • Add a post-build action "Publish PMD analysis results" and enter the path to the xml file we produced. In this example it would be build/oclint/lint.xml
4. Build the job

If everything worked, you should see a new "PMD Warnings" section in your build information, and after a few builds the trend chart will appear.

2013-03-08
digital
howto dev

Recently I wondered why a specific method gets called way more often than I would expect. I wanted to find out which other methods call it.

The usual approach…

… would be to set a breakpoint at the beginning of the method and look at the stack trace in Xcode's 'Debug Session' pane for the calling methods. Surely a tedious way: you'll have to write down the caller and the context that led to the caller.

A better way…

… is to utilize the breakpoint capabilities of Xcode or - more specifically - of LLDB, the new debugger that comes with the new compiler CLANG. In this approach, you'll also set a breakpoint at the beginning of the called method. Then select the breakpoint label and ctrl-click it to get a small pop-up that lets you define the behavior of the breakpoint.

Three simple things to do:

  • Select 'Debugger Command' from the 'Action' pop-up

  • Enter 'bt 10' into the text field

  • Check 'Automatically continue after evaluating'

'bt' is a command that LLDB understands; it's short for 'backtrace'. The number that follows defines the number of frames the trace will have. In this case 'bt 10' instructs the debugger to print the last 10 method calls before it hit the breakpoint - exactly what we need. The checkbox we set in the third step simply has the effect that the program keeps running, which is nice if you're testing e.g. the GUI part of an application.
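
If you prefer typing over clicking, the same behavior can be set up directly in the LLDB console; roughly like this (the file and line are taken from the example below):

(lldb) breakpoint set --file WMSCollectionViewController+CollectionViewDelegate.m --line 41
(lldb) breakpoint command add
Enter your debugger command(s).  Type 'DONE' to end.
> bt 10
> continue
> DONE

Without an explicit breakpoint number, 'breakpoint command add' attaches the commands to the breakpoint created last; the trailing 'continue' is the console equivalent of the 'Automatically continue' checkbox.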

After you run the program (don't forget to enable breakpoints) you'll find the logged traces in the debugger console:

* thread #1: fooApp -[WMSCollectionViewController collectionView:didSelectItemAtIndexPath:] at WMSCollectionViewController+CollectionViewDelegate.m:41, stop reason = breakpoint 2.1
frame #0: fooApp -[WMSCollectionViewController collectionView:didSelectItemAtIndexPath:] at WMSCollectionViewController+CollectionViewDelegate.m:41
#1: fooApp -[WMSCollectionViewController updateCollectionView] at WMSCollectionViewController.m:158
#2: fooApp -[WMSCollectionViewController reloadDocuments:] at WMSCollectionViewController.m:136
#3: Foundation __57-[NSNotificationCenter addObserver:selector:name:object:]_block_invoke_0
#4: CoreFoundation ___CFXNotificationPost_block_invoke_0
#5: CoreFoundation _CFXNotificationPost
#6: Foundation -[NSNotificationCenter postNotificationName:object:userInfo:]
#7: fooApp -[WMSAppDelegate checkDocumentsOpenState] at WMSAppDelegate+DocumentSetup.m:247
#8: fooApp -[WMSAppDelegate observeValueForKeyPath:ofObject:change:context:] at WMSAppDelegate+DocumentSetup.m:232
#9: Foundation NSKeyValueNotifyObserver

(I deleted some information for this purpose; so try it yourself - you'll find a lot of useful things in there!)

So what can we learn from the output?

  • The first line marks the method that triggered the trace (the one where we set our breakpoint) and describes the thread, which is useful if you have to wade through the multi-threaded logic of your application.

  • As this is a trace from the viewpoint of the called method where we set the breakpoint, the frames are in reverse order. They are prefixed with frame.

  • Next is the framework or library the called method belongs to. In the example, there are method calls from fooApp, which is the example app itself, or from the frameworks (UIKit, Foundation, CoreFoundation etc.)

  • So now we know that initially a KVO (Key-Value Observing) change led to the call of our examined method. The KVO change triggered a notification which called reloadDocuments:. In reloadDocuments: the method updateCollectionView got called, and finally from that one we landed at the method under investigation.

  • As a further convenience, the file and line number from which a method is called are also printed.

Cool, eh?

2012-09-05
digital
apple

AppleInsider about an interesting approach:

Apple's new Passbook feature in iOS 6 isn't just a coupon app; it's a framework that enables retailers to develop smart apps for transactions, without relying on new Near Field Communications (NFC) hardware to do so.

2012-02-26
digital
howto dev

HeaderDoc is a very versatile documentation generation system by Apple which can handle a wide variety of languages. The latter was the reason for me to give it a chance, since other generators either work only with a specific language or produce (for me) unusable results. HeaderDoc works fine with PHP, JavaScript etc., which makes it a perfect tool for web projects - otherwise you'd have to deal with different tools that can't produce combined docs.

HeaderDoc comes with Mac OS X (at least when you have Xcode installed). But I wanted to use it on my Ubuntu server, where Jenkins does all the integration stuff.

Apple has open-sourced HeaderDoc and you can find it on Apple's open source website at http://www.opensource.apple.com/

Download & unpack

  • The most recent version is in the Mac OS X 10.7.3 tree; copy this link and download the archive into a directory on your Linux machine: http://www.opensource.apple.com/tarballs/headerdoc/headerdoc-8.8.38.tar.gz

    • wget http://www.opensource.apple.com/tarballs/headerdoc/headerdoc-8.8.38.tar.gz
  • unpack with tar -xvzf headerdoc-8.8.38.tar.gz and cd into the created directory

Requirements

HeaderDoc is basically a Perl script, so you'll need a recent Perl installation. This should be the case on every 'normal' system, so I won't dive into installing Perl here. Besides that, HeaderDoc needs some other libraries. If they're not installed, run the following commands:

  • FreezeThaw

    • sudo apt-get install libfreezethaw-perl
  • libxml2-dev

    • sudo apt-get install libxml2-dev
  • xmllint (from the libxml2-utils)

    • sudo apt-get install libxml2-utils
  • checkinstall (not necessary for this installation, but you should always use checkinstall when manually installing software, since a plain 'make install' circumvents the Ubuntu/Debian package system!)

    • sudo apt-get install checkinstall

Build & install

  • make clean
  • make - This will actually compile the software & libraries and perform a lot of tests. Three of them ('class 3', 'header 5', 'template 1') failed during my install, but I didn't notice any faulty behaviour using HeaderDoc.
  • sudo checkinstall make realinstall

Ready. You should now have two files in /usr/bin/ (a short usage sketch follows after the list):

  • headerdoc2html - the processor itself
  • gatherheaderdoc - a utility to combine the docs in an overview
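
A quick test of the installation could look like this (the directory names are just examples):

# generate HTML docs for a source tree into ./docs
headerdoc2html -o docs src/
# build a combined overview page from the generated docs
gatherheaderdoc docs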

You'll find the documentation for HeaderDoc on Apple's developer site.

2012-02-12
digital
os_x

When you see messages like the following in your console log

11.02.12 17:36:41,009 webdavfs_agent: network_mount: WebDAV protocol not supported; file: /SourceCache/webdavfs/webdavfs-322/   mount.tproj/webdav_network.c; line: 3131

this doesn't necessarily mean that Mac OS X suddenly stopped understanding the WebDAV server it could mount minutes ago. In fact, it's more likely that the problem lies on the server side and the DAV config is incomplete.

Since the Mac OS X WebDAV client isn't really chatty about what's going (wr)on(g) here, you should use a Unix WebDAV client (e.g. cadaver) for debugging.
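
A short cadaver session is enough to see what's really happening (the server URL is made up):

cadaver https://example.com/webdav/
# at the dav:/webdav/> prompt, a simple 'ls' triggers the PROPFIND request
# that shows up in the Apache log below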

The client can properly connect to the DAV server, which you can see from the result code ("200") in the Apache log file:

"OPTIONS / HTTP/1.1" 200 250 "-" "cadaver/0.23.3 neon/0.29.0"

But the client tells you that something is wrong anyhow, which you can again verify in the Apache log:

"PROPFIND / HTTP/1.1" 405 508 "-" "cadaver/0.23.3 neon/0.29.0"

The result code "405" means "Method not allowed". Does the client try to talk to the DAV server via incorrect command and Mac OS therefore reports the "not supported" message? No, it's just that the DAV server doesn't know how to handle the DAV protocol properly.

So go and check your server config. When I upgraded Plesk from version 10.2 to 10.4, the upgrade process didn't catch all the relevant vhost.conf files and therefore left me with a half-configured DAV server. The problem is that Plesk 10.4 uses a different directory layout under /var/www/vhosts/, and I had to manually copy and configure the vhost.conf files to their new places.

2012-01-29
digital
tech

It's a known fact that the OCR engine integrated in DevonThink Pro Office is only the single-core version - it's due to licensing issues, and DevonThink would have to charge a lot more if they made the multi-core version available.

I wanted to know whether the Abby FineReader which ships with the Fujitsu ScanSnap S1300M utilizes the multi-core capability. So I ran some tests.

Though I don't think it uses multiple cores, I found the ScanSnap software to be 4x faster.

For a 2.5-page document, the 'normal' workflow (using the built-in OCR of DT) takes a lot more time compared to a slightly different workflow where the OCR is done by the ScanSnap software. The actual results for the OCR job were 48 seconds vs. 12 seconds.

The caveat is that this approach doesn't have the queue feature that DT has, so it'll have to finish the OCR process before you can feed it the next document. But since the OCR is 4 times faster, this is a minor issue, and it will still increase your overall throughput.

If you want to follow this, just check "convert to searchable PDF" in the profile settings of the ScanSnap software. Then uncheck the corresponding option in the DevonThink Pro Office preferences.

2011-10-22
digital
howto os_x cacti

After upgrading my cacti server to Mac OS X Lion, the graphs in my Cacti installation needed a very long time to render and the CPU usage of rrdtool spiked – somehow blocking the whole machine.

The rrdtool needed by cacti was installed via Fink, which is a fine project that maintains all the usual *nix tools that don't come with Mac OS X. Fink gives you the convenience of a Debian package manager and makes installing additional software a breeze.

Because I didn't know exactly which program was responsible for the slow rendering, I dumped the whole fink tree, which is normally located at /sw, and started over with a fresh install. After fink completed all the compiling and installation, I had a fast cacti again. Since I didn't update the fink packages regularly, I was sure that some bugfixes in the newer versions had resolved the slowness problem.

After the recent update of Lion to 10.7.2, I had to realize that the rendering was totally slow again. So I thought that an update of rrdtool and the other needed tools should do the trick again. Unfortunately, there were no newer versions of the tools than those which were already installed.

After trying some things (which didn't help), I looked for a way to circumvent the dump-everything-and-spend-another-two-hours-reinstalling routine. Since there was no newer version for 10.7.2, there had to be a different cause.

I figured that one of the following tools had to be the offending one:

  • librrd4-shlibs
  • rrdtool
  • pango1-xft2-ft219 (dev, shlibs & the main one)
  • freetype219 (main & shlibs)
  • fontconfig2

So find their package names and remove them; for 'freetype' that was something along these lines (your exact package names may differ):
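
fink list freetype                          # look up the exact package names
fink remove freetype219 freetype219-shlibs  # remove the installed packages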

When you delete or remove a package via fink and then reinstall it again, fink makes use of the binary packages it produced during the former installation. (I tried the command-line switches to avoid this behaviour but had no luck.)

What helped was to delete the packages from the cache directory (on my system this is at /sw/fink/10.7/stable/main/binary-darwin-x86_64/) and to start the fink install command again.

I think it's safe to delete them all; on my system that was roughly:
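
# wipe the cached binary packages so fink has to build everything from source again
sudo rm /sw/fink/10.7/stable/main/binary-darwin-x86_64/*.deb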

fink grabs the source and – since the binary package is no longer available – compiles everything from scratch.

Install the packages. It's best to start with the main package (the one you were initially interested in), because fink's package management will install additional ones automatically when needed; in my case roughly:
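
# reinstall the main package; fink rebuilds and installs the dependencies as needed
fink install rrdtool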

After that, the rrdtool graph rendering was fast again. So my conclusion is that even during minor updates of Mac OS X, underlying things can change in such a way that programs compiled on a different version of the system no longer work as expected.

So, if your tools behave differently after an update or upgrade, try a fresh compile of them first.

2011-05-01
analog
video fun

Wonderful. Did Paul McCartney host car shows back in the day?

(Via FAILBlog)

2011-04-28
digital
biz apple

The otherwise quite informative "Chart of the Day" from businessinsider.com recently dealt with a rather odd comparison, contrasting the number of Windows 7 licenses sold with the "iOS units" sold in the same period.

It isn't even clear to me what "iOS units" are supposed to be, since according to the legend Macs were counted as well, and those definitely don't run iOS…

Leaving that little fuzziness aside, the question remains what the comparison is supposed to show at all. The essential (and only) "conclusion" that is drawn reads:

For all of its mind-blowing success, sales of Apple’s computing products are still just a fraction of Microsoft’s Windows 7 licenses.

And? Surely they're not comparing sales of a new piece of software for an installed base of over 1 billion PCs with sales of iPhones, iPads and iPod touch devices? With a platform that started from zero three years ago? I can't imagine that - it would be like comparing the liters of heating oil sold with the number of wind turbines erected. Nobody would do that. Just as nobody would ask why, after one and a half years, only about a third of users have switched to the supposedly so long-awaited new version. As a reminder: leaving the completely failed Vista aside, the last (halfway) usable Microsoft system - Windows XP - was brought to market a good ten years ago (!). It takes some skewed comparisons to be able to count that as a success…

Shaking my head…

2011-04-26
digital
tech privacy

It's a drama. Six days ago, the servers hosting the PlayStation Network (PSN) were 'broken into'. The PSN is used to distribute updates and game demos, to manage saved games and to exchange messages. It also hosts the PlayStation Store, where you can buy games, music and movies. Since then the PSN has been offline, and Sony assumes that operations can be resumed 'in a week' (!). Today Sony came forward with an update.

Valued PlayStation Network/Qriocity Customer: We have discovered that between April 17 and April 19, 2011, certain PlayStation Network and Qriocity service user account information was compromised in connection with an illegal and unauthorized intrusion into our network.

A week after the Sony experts discovered the break-in, they are now informing the more than 75 million PSN users that they believe all their data has been stolen:

[…] we believe that an unauthorized person has obtained the following information that you provided: name, address (city, state, zip), country, email address, birthdate, PlayStation Network/Qriocity password and login, and handle/PSN online ID. It is also possible that your profile data, including purchase history and billing address (city, state, zip), and your PlayStation Network/Qriocity password security answers […]

The wording unauthorized person is almost cute - it somehow sounds like a petty thief or ne'er-do-well who threw a Coke can into the wrong recycling bin. But this is about the loss of passwords, online identities, email addresses, purchase histories… And that's not all; further down comes a not exactly insignificant addition:

If you have provided your credit card data through PlayStation Network or Qriocity, out of an abundance of caution we are advising you that your credit card number (excluding security code) and expiration date may have been obtained.

This is the worst-case scenario. One of the biggest thefts ever - we are talking about millions, probably tens of millions of credit card records. What is catastrophic is that Sony only came out with this information a week after the attack and therefore has itself to blame if this data is now misused.

Sony takes information protection very seriously and [blah blah]

A statement that of course must not be missing from a PR point of view, but given the scale of the incident and Sony's incompetent behavior, it is nothing but sheer mockery.

The mere fact that Sony announced, just one day after the incident, that it would shut down the existing system completely and replace it with a new one speaks volumes. Obviously the operators were aware that the PSN was not secure:

[…] strengthen our network infrastructure by re-building our system to provide you with greater protection of your personal information […]

That should take 'slight negligence' off the table; I'm curious which lawsuits Sony will now be hit with. It remains to be hoped that Sony takes the whole affair just as seriously as it took its effort, only a few weeks ago, to hunt down George Hotz (aka GeoHot). Back then the electronics giant obtained a court order for the release of the IP addresses of website visitors and YouTube users.

The case also makes one thing clear once more: customer data is a very valuable asset - for companies it 'only' enables business, but for customers it is part of their existence, which they trustingly place in someone else's hands. Not disappointing that trust is surely one of the foremost duties. Whether it's the important OS patch, a suitable password or the double-checked processing of form data - protecting customer data is a task for every day and everywhere.

2010-08-04
digital
google

Google reporting on the blog:

… Wave has not seen the user adoption we would have liked. We don’t plan to continue developing Wave as a standalone product…

This comes as a bit of a surprise, but I believe nobody really understood the concept anyway - not even at Google…

2010-08-03
digital
tech

Cool! Marco Arment just wrote me that Instapaper will get support for delicious "in the relatively near future".

2010-07-26
digital
apple

MacRumors reports:

iLife ’11 Coming in August with a New Mystery Application

  • …
  • Improving the integration of social networks
  • New application (mystery!)
  • Disappearance of iDVD
  • …

2010-07-22
analog

See: Spiegel Online

In my opinion, it's a fake. If a 40-ton whale comes flying in three meters away from me, I don't just stare straight ahead impassively. The photographer doesn't go over (the distance to the "victim" barely changes), and if he's not helping anyway, he still doesn't take any more photos? Nonsense.

Update 19.1.2014: The article at Spiegel Online no longer shows the pictures mentioned above. Interesting...

2010-07-04
analog

The newspaper "Blick" (Switzerland) on the Argentina - Germany match:

The clever Löw squad beats Argentina 4:0 beyond any discussion and marches confidently into the World Cup semi-final.

"Clever", "fast-paced", "high-pressure" - all of it fits, but "beyond any discussion" ("diskussionslos") is the one I like best; it describes the whole performance perfectly in one word.

2010-07-01
digital
apple

Macrumors citing a report from BGR:

Boy Genius Report claims to have received information from an Apple source noting that the company is finally gearing up to launch its cloud-based iTunes initiative, a program that will also included wireless syncing for devices.

That would be nice and overdue; I fear, however, that such functionality reaches so deeply into the internals of the iPod, iPad, iPhone etc. that it requires a new version of iOS. Since iOS 4 is just three weeks young, I consider it unlikely that another update will come at such short notice - certainly not one that brings such fundamental changes.

That realistically leaves only the fall, when iOS 4.1 will merge the operating systems of the iPhone and iPad. That would also justify a version bump in the first decimal place, since a .x release implies additional features beyond merely bringing the iPhone and iPad to feature parity - especially since otherwise nothing significant would change for the iPhone, as it is already on iOS 4…

Let’s see.

2010-06-21
analog

sueddeutsche.de:

A so-far secret papal file weighs heavily on Walter Mixa: close colleagues and acquaintances of the former bishop of Augsburg report alcohol and perception problems…

No end in sight. When will the public prosecutor finally step in?

2010-06-20
analog

The reminders that the weather is 'not bad' and actually 'normal' have by now become part of every weather forecast, just like the A7 is part of the traffic reports.

Thankfully - or rather: naturally, since it's paid for by our taxes - the Deutscher Wetterdienst makes its records available to the public online.

I compared the May figures from 1991 to 2010.

Conclusion: all of this year's figures are below the comparison values of the last 20 years! The mean temperature is 4.6°C below last year's value and almost 7°C below the value of the year before. So a 25% deviation is 'still within range' and 'normal'??? Climate mafia…

2010-06-16
analog

Link: http://bit.ly/dqqv9R

The loss of touch with reality and the sense of innocence are beyond belief:

The former bishop of Augsburg, Walter Mixa, apparently cannot make peace with his decision. He now speaks of a forced resignation, which he says he revoked shortly afterwards. Mixa is considering taking the case to the papal court. (Via BR-Online)

I had already vented about these events here.

2010-06-10
analog
fun

… can't even get to the ball. This is going nowhere.

(via nbcnews.com)