Anyone who knows me knows I'm a pretty big fan of AWS. I come to their defense more often than not, and my Twitter feed is always buzzing about how much better they (typically) are than other wannabe cloud providers. I tend to love any new service they come out with, and I try just about everything they make available to me.
I don't usually rant about AWS. I use them in my everyday life, and they've built an amazing array of services.
The "New" CloudSearch
Not too long ago, Amazon released a new version of CloudSearch with a number of long-anticipated features, some of them very enticing, such as:
- Geographic Search (Lat-Lon)
- Search Highlighting
Unfortunately, much of it is implemented very poorly, and so much of it is a step backward that I have to wonder if this was some sort of early April Fools' joke. This "new" CloudSearch feels more like a pre-beta release, taking several leaps in the opposite direction of progress.
Hey Amazon, this is a joke right?
A long-awaited step was support for multiple languages; however, they also removed the ability to specify the language of a document. Instead, they suggest that what you really wanted was to specify the language of each field. What?
To upload your data to a 2013-01-01 domain, you need to:
- Omit the lang attributes from your document batches.
- You can use cs-import-documents to convert 2011-02-01 SDF batches to the 2013-01-01 format.
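The conversion itself is mechanical: a 2013-01-01 batch is essentially the old SDF with the per-document version and lang attributes stripped out. A minimal sketch of that transformation (strip_legacy_attrs is my own hypothetical helper, not part of cs-import-documents or any AWS tool):

```python
import json

def strip_legacy_attrs(batch_json):
    """Roughly convert a 2011-02-01 SDF batch to the 2013-01-01 format
    by dropping the per-document 'version' and 'lang' attributes."""
    docs = json.loads(batch_json)
    for doc in docs:
        doc.pop("version", None)
        doc.pop("lang", None)  # language is now per-field, not per-document
    return json.dumps(docs)

old_batch = json.dumps([
    {"type": "add", "id": "doc1", "version": 1, "lang": "en",
     "fields": {"title": "Hello"}},
])
print(strip_legacy_attrs(old_batch))
```

In practice cs-import-documents handles this for you; the point is just how little the "new" format actually changes at the document level.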
Another important regression is the loss of the ability to upload documents with more fields than you initially need indexed. This was incredibly useful because your backend could simply dump all of its objects into your SDF and upload them to the domain; then, if you wanted to add a new field in the future, you didn't need to re-upload all of your documents. This has also been removed:
Make sure all of the document fields correspond to index fields configured for your domain. Unrecognized fields are no longer ignored; they will generate an error.
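Because unrecognized fields now fail the upload instead of being silently ignored, the old "dump everything" workflow needs a client-side filter before each batch goes out. A sketch of what that looks like (the field set is illustrative; you would really fetch it from your domain's configuration):

```python
# Hypothetical pre-upload filter: drop any field the domain doesn't index,
# since the 2013-01-01 API rejects batches containing unrecognized fields.
INDEXED_FIELDS = {"title", "description", "price"}  # illustrative only

def filter_unindexed(doc):
    """Return a copy of the document with non-indexed fields removed."""
    doc = dict(doc)
    doc["fields"] = {k: v for k, v in doc["fields"].items()
                     if k in INDEXED_FIELDS}
    return doc

raw = {"type": "add", "id": "doc1",
       "fields": {"title": "Widget", "internal_sku": "X-99"}}
print(filter_unindexed(raw))
```

This is exactly the kind of busywork the old behavior saved you from.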
What's worse, they added support for indexing from DynamoDB, but if you don't define every single field as an index field in your domain, you have to either hand-edit the SDFs or add everything to your CloudSearch domain:
The DynamoDB integration is also not a pipeline; it only helps with the initial upload.
The need to specify the exact format of the documents you're going to upload is also very demanding. Before, any field could be multi-valued; now you have to explicitly declare whether a field is multi-valued, and if you want to use something like a "suggester", it only works with single-valued fields.
You also can't build a single field that merges multiple source fields unless each of those source fields is itself included in the index. There's no more combining several source fields into one field to save on space.
Not just a new API, a whole new (incompatible) system
Perhaps worst of all, the new version of CloudSearch is entirely incompatible with the old one. This means that if you want to try out any of the new features, you basically have to start over: redesign your systems, then re-create and re-upload all of your existing indexed data. Amazon provides no automated tools to do so, either; you're pretty much on your own.
If you are an existing user of CloudSearch, you won't want to switch to this new system. It's not nearly as advanced as your existing implementation. You'll be missing quite a bit of functionality. If you're just starting out, you might not notice, and you'll probably be happy with some of the different (not new) features they're providing, such as pre-scaling and multi-AZ support.
Hopefully this is not indicative of the direction CloudSearch, and Amazon in general, is moving in. This is the first time they've released a product that has completely frustrated me, to the point of wondering what they were thinking. This is not the path forward; it's a complete rework of an existing system for a specific use case, not a general need.