Thursday, December 30, 2010

Integrating Google Contacts with Your Enterprise App for better UX

Integrating Google Contacts with your enterprise application can be more challenging than you'd think, but it's critical to a better user experience.

To keep a consistent level of data quality across your contacts, consider selecting an application like your CRM (e.g. Salesforce) as your "system of record" and creating a one-way integration to Google Contacts.

Google Contacts doesn't have the notion of a common enterprise-level object which can be uniformly extended by adding custom fields. It also doesn't support data validation rules to enforce consistent data quality across the enterprise. Custom fields are supported, but only through "free form" key/value pairs called Extended Properties. It's up to you to maintain consistency and data quality.

  • When creating a contact through the API, you can add any number of Extended Properties, but make sure the keys are consistently named (see the sketch after this list).
  • To make the UX better, provide a Google Gadget that allows users to directly update or create (enterprise) contacts. The Gadget will update the enterprise app's contact, which in turn will trigger the sync with Google Contacts. You can place small Gadgets in the main Mail page.
  • Google's premium editions provide a Shared Contacts feature which is equivalent to a Global Address Book. It's much easier to sync all enterprise data with this list vs. trying to sync with every user's personal contact list.
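
To make the first point concrete, here's a rough sketch of creating a contact with an Extended Property using the gdata Python client library. The domain, credentials, and the 'crm.account_id' key are made-up placeholders; the point is to pick one naming convention for your keys and enforce it in your sync code.

    import gdata.contacts.client
    import gdata.contacts.data
    import gdata.data

    client = gdata.contacts.client.ContactsClient(domain='example.com')
    client.ClientLogin('admin@example.com', 'password', 'crm-contact-sync')

    contact = gdata.contacts.data.ContactEntry()
    contact.name = gdata.data.Name(
        full_name=gdata.data.FullName(text='Jane Doe'))
    # Extended Properties are free-form key/value pairs -- consistency
    # is entirely up to your integration code
    contact.extended_property.append(
        gdata.data.ExtendedProperty(name='crm.account_id', value='001ABC'))

    client.CreateContact(contact)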

Monday, December 20, 2010

Google, are you serious about Enterprise Apps or not?

I'm not seeing any significant improvements to Google's basic enterprise applications, namely Mail, Contacts, Calendar, and Documents. They started fast and heavy, building out features or buying companies with existing products, but lately it seems they're in near-maintenance mode. They're far from done and still (way) behind Microsoft Office. So, what gives?

I host my own domain (standard) on Google and use it for personal needs. I love the Calendar app, the sharing features in Calendar and Documents, and the corresponding apps for my Android phone (wish my iPad had the same apps).  There's enough functionality to satisfy me "the consumer" but not me "the enterprise user". 

For the enterprise, I look at every application from 2 perspectives: UI and API. You need both for a great UX. Users want a great UI and all their apps to work seamlessly together. Google's UI is improving, albeit slowly, but their APIs have stalled. There haven't been any new versions in over a year now. The APIs were just OK to begin with, so you'd expect to see a ton of improvements. Better support for structured data and a usable search capability are just a couple that come to mind. Can you believe that you still cannot search contacts using name, email, or phone #?

I like the idea of hiring a team to develop an application they've never built before. They're not bound by previous experiences so they can create a new (and hopefully better) experience.  BUT, at some point you'll need to hire some folks with experience to complement the team and build out the features users need. 

Wednesday, November 24, 2010

Are you developing an API for your APIs (API4API)?

Huh!? Enabling automation of API consumption and adapting to application changes will play a crucial role in your API adoption. It's great to have your APIs documented, but it would be even better if you allowed your API (metadata) to be introspected programmatically.

The new trend in applications (and APIs) is all about on-demand and self-service. Web 2.0 (Cloud) applications are more adaptable and powerful. Being able to integrate applications quickly will be key to their success. If you're only providing hard-coded documentation, then you're a speed bump to your application's success.

Exposing a set of standard metadata APIs for your APIs will allow developers to build tools to auto-magically consume your APIs and keep up with changes much more efficiently. Integration tool vendors often write connectors for applications, which allows them to expose the various APIs in a standard way in their tools. Customers then use those tools to build out their integrations without having to worry about the low-level details of every API call they plan to use. If you expose a standard set of APIs that allows these tools to auto-discover your APIs and auto-react to changes in your application schema and services, your API consumers will love you for it!

When WSDLs were created, everyone cheered the fact that there was a standard way to describe interfaces. You could programmatically consume it and, at the very least, create stubs to provide/consume the service. It made getting started easier. Unfortunately, most implementations I encountered only offered statically generated WSDLs that either only changed when the application was upgraded or required frequent exporting/importing to capture on-demand customizations. Painful at best. Now we have REST-ful services which are easier(?) to consume, but you 1st have to read the often lengthy documentation to get started. I don't see anyone running to support WADL (the WSDL equivalent for REST-ful services). Do we need another static interface definition?

Forget WSDLs and WADLs! I propose we create a standard metadata API for APIs (API4API). The key is to keep it simple so it will be adopted. All it needs to do is describe the objects and services you're planning to expose. For every service, it defines the inputs and outputs. If you add more objects or services, a simple call would reveal them. It should also indicate the authentication methods that are supported (e.g. Basic Auth, OAuth).
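
To make the idea concrete, here's a hypothetical sketch in Python of what a single discovery call might return and how a tool could consume it. None of these endpoints, objects, or field names exist anywhere today; they only illustrate the shape of the thing.

    # Hypothetical response from one discovery call, e.g. GET /api/meta
    metadata = {
        "auth": ["Basic", "OAuth"],
        "objects": {
            "Contact": {"fields": {"name": "string", "email": "string"}},
        },
        "services": {
            "createContact": {"input": "Contact", "output": "ContactId"},
            "findContact": {"input": "query", "output": "Contact[]"},
        },
    }

    # A tool can discover every service and build calls automatically --
    # no static docs, no re-imported WSDLs
    for name, svc in metadata["services"].items():
        print("%s: %s -> %s" % (name, svc["input"], svc["output"]))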

If your applications are evolving, if your applications are customizable, then you need to give developers the ability to create dynamic integrations to keep up.

Wednesday, October 27, 2010

Why Yammer can't compete with Chatter!

Security! I'm not talking about the general privacy and security that all SaaS vendors provide, like SSL, data center security, etc. I'm referring to a much more granular level of data access control which is defined and controlled at the enterprise application layer. Most companies want to control the data their users access. Take an SFA system as an example. You typically see policies that allow sales reps to see information about their own accounts and opportunities (only), sales managers to see data belonging to their teams, regional managers within their region, ... so on and so forth. So, you'd only want your users to see the Enterprise "Tweets" that they should see.

Independent Enterprise Social Networking services, like Yammer, lack visibility into external system data and the data access control policies in those systems that control who should see what. Sure, you can create logical groups and confine private posts to those in the group. You can even integrate SAP, Salesforce, Oracle EBS, etc. as members of a group, but you're still not going to know which specific data updates someone should have access to, at least not very easily and not without external and custom code.

Application administrators go to great lengths to set access control rules, so it would be great if the Enterprise Social Networking service could enforce those same rules. Unfortunately, they really can't. First, they would need access to the rules, which is not easy to come by (not their fault). Second, they'd need a standard way of defining the external application objects and access control rules, which doesn't exist today (again, not their fault).

Chatter has an unfair advantage by being native to Salesforce. The same data access control rules you configure for your objects are also valid for Chatter feeds. Users can only subscribe to data that they have access to. No additional work required! However, Chatter will have the same limitations as Yammer when it comes to data objects external to Salesforce. Salesforce does provide the flexibility of adding custom objects and associated access control rules, but you'll then need to make sure you synchronize the data with your custom object(s).

I'd say that CubeTree's sale to SuccessFactors was certainly a good move on their part. Once they're integrated into SuccessFactors, they'll be in the same position as Chatter but for HR-related data objects contained within SF modules.

I hope you don't take this as a beat-down on Yammer and similar services, but when it comes to Enterprise Social Networking, they're going to lack the necessary controls to be widely adopted.

Monday, October 18, 2010

Salesforce.com Chatter: Tips for Integrating External Enterprise Events

Integrating external events can be very straightforward (from Salesforce.com's perspective). Here are a few tips to help get you going quickly.

1. Every parent object in Salesforce is Chatter Feed enabled. By default, the standard objects have been configured to create Chatter events (or feeds) as a result of creates and updates (of certain key fields). To make changes, go to Setup>Customize>Chatter>Feed Tracking. You can configure feed tracking for each object (custom included) to track changes to standard and custom fields. For more information, check out this link.

2. Any existing external data integration into Salesforce can instantly be Chatter enabled. Simply go to the feed tracking page for the object of interest and enable tracking for the fields you're updating.

3. For child objects like the Opportunity Line Items, you'll notice that there's no feed tracking capability. This makes sense, as you would typically want to track an Opportunity as a whole and not just individual line items. If you're updating the line items with external data such as a "Ship Date" from your ERP, add an extra step to your process to create a Chatter Feed post for the line item's parent opportunity. It's a simple Create call to add a single record to the FeedPost object (see the sketch after this list).

4. Avoid creating additional custom objects and synchronizing external data if all you're interested in is being notified of updates from your external applications. It not only creates more work to create and maintain the integrations, it will also require users to subscribe to additional data objects. If you only need to present data from the external systems to the users, consider building a quick mashup instead.

5. Avoid creating a User Update or Post on behalf of an external system as it will potentially bypass the security rules you have in place and may expose sensitive data to your entire org. Since users can only subscribe to Chatter events for data they have access to, it would be a best practice to create feed posts associated with specific data objects.
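
For tip 3, the FeedPost create might look roughly like the Python sketch below, using the Salesforce REST API (v20.0 era, when FeedPost was the object name; later API versions reworked it into FeedItem). The instance URL, session token, opportunity ID, and message are all placeholders.

    import requests

    instance = "https://na1.salesforce.com"            # your instance URL
    headers = {"Authorization": "Bearer 00D...TOKEN",  # placeholder OAuth token
               "Content-Type": "application/json"}

    # Post a Chatter feed entry on the line item's parent opportunity
    feed_post = {"ParentId": "006A0000009xyz",  # parent Opportunity Id
                 "Body": "ERP update: line item ship date moved to 2010-11-01"}
    resp = requests.post(instance + "/services/data/v20.0/sobjects/FeedPost/",
                         headers=headers, json=feed_post)
    resp.raise_for_status()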

Sunday, September 12, 2010

Application Echo: Common Bi-directional Synchronization Side Effect

You set up 2 integration flows running in opposite directions, each capturing change events in one application and replicating them to the other. Sounds easy enough, till you deploy the integrations and make a single change in one application. The 1st integration flow gets notified of the change and updates its target application. As a result, the 2nd integration flow gets notified of the change and updates its target application. Which causes the 1st integration flow to see the change... so on and so forth till you finally pull the plug and break the cycle. How do you stop this infinite application echo from happening?

You can implement a method which queries the target system 1st to see if the data in the target is different from the source and only applies the update if it is. While this seems like a good approach, it's not very efficient. It requires an additional query and a field-by-field comparison for each integration flow, which can be the cause of performance issues and can certainly put additional load on the target applications.

A simpler and less costly method would be to create a dedicated user credential on each application that is only used by the integration flows to log in. When an update triggers the flows, a simple check to see who last modified the record will let the process know to proceed or to stop.
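
A minimal sketch of that check in Python, assuming the change event carries a "last modified by" field (the IDs below are made-up placeholders):

    # Dedicated credential used only by the integration flows
    INTEGRATION_USER_ID = "005A0000001abcd"  # hypothetical user Id

    def should_process(change_event):
        """Skip changes made by the integration user itself to break the echo."""
        return change_event["LastModifiedById"] != INTEGRATION_USER_ID

    event = {"Id": "001A000000xyz", "LastModifiedById": INTEGRATION_USER_ID}
    if should_process(event):
        print("propagate the update to the other application")
    else:
        print("change came from the sync itself -- stop the loop here")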

Monday, August 2, 2010

What's a connector?

Connector is one of those overused words when talking about application integration these days. Depending on who you ask, you'll get different interpretations.

Application developers typically talk about connectors as the piece in between that connects 2 different applications together. For example, the Salesforce to SAP connector, which was commissioned by Salesforce to sync accounts between the 2 applications. When application developers refer to connectors, they usually are talking about specific point-to-point integrations that only implement one or more limited use cases.

When application integration developers talk about connectors, they are only talking about the piece that makes the calls to one application. It usually involves wrapping the APIs exposed by an application and exposing them in a uniform or standard way within the integration platform. It's usually (but not always) use case or module independent. In the case of Salesforce and SAP, you'd have 1 Salesforce and 1 SAP connector. If you build an account-to-customer sync, that's just called the integration, which will use the connectors as the means to get data in and out of the applications.
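
As an illustration of this second sense of the word, a connector is roughly an adapter behind a uniform interface, like the hypothetical Python sketch below. The method names are mine, not from any particular integration platform.

    from abc import ABC, abstractmethod

    class Connector(ABC):
        """Wraps one application's native API behind a uniform interface."""

        @abstractmethod
        def query(self, object_name, filters):
            """Fetch records of the given object type."""

        @abstractmethod
        def upsert(self, object_name, records):
            """Create or update records of the given object type."""

    class SalesforceConnector(Connector):
        def query(self, object_name, filters):
            ...  # translate to SOQL / Salesforce API calls

        def upsert(self, object_name, records):
            ...  # translate to Salesforce create/update calls

    # The "integration" (e.g. an account-to-customer sync) just composes
    # connectors: records = sap.query("Customer", {...});
    # salesforce.upsert("Account", records)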

Connectors are sometimes referred to as adapters, but that doesn't change the meaning.

By the way, if you're looking for the Salesforce to SAP connector, it's no longer available from Salesforce.

Friday, July 9, 2010

Cloud-based Message Queueing and Persistence

I think given the choice, most developers would rather deal with synchronous than asynchronous processes. It's just much easier to wait for an immediate response and continue down your path vs. having to queue requests and set up a secondary process to poll for replies, perhaps correlate them to the original requests, and then process them. However, since you can't always control every application you need to work with, you will find that having a place to temporarily park messages for either later processing or pick up is necessary. You sometimes need UPS to retry delivering your package, and sometimes you'd rather go pick it up at the distribution center.

As an example, if you've ever worked with Salesforce's Workflow and Outbound Messaging feature, you've probably had to figure out a way to deal with storing the Outbound Notification from Salesforce while your Great Plains, Oracle EBS, ... instance is down for maintenance. Salesforce's Outbound Messaging service will only retry a finite number of times and then the notification is lost, so having a place to temporarily store the notification until your application is back online can be quite necessary.

For most use cases, you have plenty of options for persisting your messages, from simple database tables to more sophisticated message queueing software. If you're looking to reduce internal IT costs and headaches, Amazon Simple Queue Service (SQS) can be a good option too. It's simple to set up, it's fully managed, and it's very inexpensive. You have several choices of geographic location for the service: US East/West, EU, and APAC. The API is straightforward enough, and since it uses standard Cloud-based protocols (RESTful and SOAP WS over HTTPS), you can be up and running in no time.
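
Here's how little code the basics take; a minimal sketch using the boto3 Python library (the modern successor to the boto client of that era), with the queue name and message as placeholders:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")

    # Create (or look up) a queue to park outbound notifications in
    queue_url = sqs.create_queue(QueueName="outbound-notifications")["QueueUrl"]

    # Producer side: park the message until the target app is back online
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody='{"event": "opportunity.updated", "id": "123"}')

    # Consumer side: poll, process, then delete
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
    for msg in resp.get("Messages", []):
        print(msg["Body"])  # hand off to the target application here
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])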

Amazon SQS uses a distributed architecture with multiple queue nodes, and as most Cloud-based services go, it sometimes takes a little time for all the nodes to sync up and return the expected response to your request. Here are some of the behaviors I noticed while playing around.

  • There was almost always a delay (10-30 seconds) in the availability of newly created queues. I would create a queue and then use the list queue service to see if it was there. The same was true when I deleted queues. They showed up in the list of queues for 10-30 seconds after I had a confirmation of deletion.
  • Message retrieval seemed somewhat random. I would send a number of messages in a row and then try to retrieve them. I would get my messages back, but in seemingly random groupings. I never lost a message, so that was great news.

Overall, SQS was very reliable, and as long as you don't care about ordered delivery and you don't have messages that are > 8 KB, it's a viable Cloud-based choice.

Microsoft's Azure platform offers a similar service which I haven't played around with, but it looks almost identical to Amazon's SQS. I guess borrowing is the best form of flattery. The operations seem to be identical (1:1) to SQS, and here's a direct quote from their documentation:
"A queue can contain an unlimited number of messages, each of which can be up to 8 KB in size. Messages are generally added to the end of the queue and retrieved from the front of the queue, although first in, first out (FIFO) behavior is not guaranteed."
So pretty much the same as Amazon's SQS. It's great when your competition doubles as your PM.

If you’re looking for a Cloud-based message persistence service to complement your Cloud-based integration platform, Amazon SQS and (probably) Microsoft’s Azure both offer worthwhile solutions.

Saturday, June 19, 2010

Collaboration in Web 2.0 Era is changing. Are you keeping up?

If you haven't noticed lately, the way we communicate is evolving. Phone calls and emails are losing ground to IM, SMS, Tweets, and Wall Posts. This is true in both social and business networks.

While Tweets and Wall Posts are great (only if you're insanely bored) for keeping up with every minute action of friends and family, and have been used by businesses for B2C interactions, they lack the privacy and control required for inter-enterprise or B2B collaboration.

Also, when it comes to business communication and collaboration, we've been missing an important collaborator, namely, the business applications. That's right, we need Salesforce.com, Eloqua, SAP, Oracle Apps, ... to proactively communicate events and collaborate with us too. It's a symbiotic relationship that up until now has not been working very smoothly.

To solve this problem, Salesforce introduced their latest and greatest innovation, Chatter, at Dreamforce '09. Despite the lengthy introduction by its passionate creator, Marc Benioff, it was clear that Salesforce is onto something that will have a big impact on how we work every day. With Chatter for Salesforce, you can keep track of any changes you subscribe to without having to set up complex workflow rules or repeatedly check the object(s) of interest for changes. Marc Benioff is admittedly copying Facebook and Twitter's model and applying it to the business world.

Seems like Lars Dalgaard has his eye on the ball too. Last month, SuccessFactors acquired CubeTree, a business/social networking provider. Whereas Chatter is tightly coupled and integrated into Salesforce, today CubeTree is not application specific but offers a rich set of APIs that allow for easy integration. I suspect that it will be natively integrated with the rest of the SFSF suite in the coming quarters.

While Chatter and (integrated) CubeTree will be great for all events occurring within their respective applications, they will require integration to other applications within the enterprise to complete the collaboration loop. And while monitoring data changes in applications can be straightforward, you'll need to know which corresponding object in Salesforce (as an example) to relate the change events to. Which means you'll still need to synchronize your base objects before SAP can send you notifications via Chatter.

For business execution, you need collaboration between people and systems. For systems to collaborate, you'll need to be able to speak their language and translate to human speak.

Thursday, May 27, 2010

Is Your Cloud Elastic or Plastic?

The discussions over multi- vs. single-tenant Cloud application design seem to be winding down, with multi-tenancy clearly the winner. Although customers should frankly not care or be impacted by either architecture, vendors would be wise to follow a path that leads to easier maintenance and lower operational costs. As the trend in SaaS and Cloud computing puts downward pressure on license prices, application vendors need to adopt technologies and architectural models that result in the most cost-effective way to operate a Cloud application. That's assuming you have a for-profit, or at least against-loss, business model.

Now, it seems natural that the next big debate in Cloud computing architecture is going to be around elasticity vs plasticity. 

Is your Cloud computing infrastructure able to scale up or down based on demand? Or do you have to plan for peak demand and deploy enough hardware to make sure you have enough capacity to address your demand and SLAs?

If it's the latter, then it's like a hotel keeping all the lights in every room on just in case someone checks in. How about turning the lights on only when you need them?! The amount of resources (aka moolah) wasted on constantly running infrastructure for peak demand can potentially negate any cost savings achieved through multi-tenancy.

The only arguments that seem to make any sense for plastic computing are with regards to SLAs and the latency involved in bringing on new hardware/software to support increases in demand. To that, I say...invest in some operational analytics. I'm sure if you can track your usage, you can see patterns of use and plan for them. For example, if you're running a sales or finance application, it seems natural that end of month/quarter would be the busiest times and beginning through mid-month, things are not so busy. Turn off the lights when you don't need them.

Monday, May 10, 2010

Can application integration be pre-packaged?

The concept of pre-packaged integrations is nothing new. For years, integration vendors have tried to sell pre-packaged application integrations with various degrees of failure. You might presume that because of these failures, application integration cannot be pre-packaged, but you would be wrong.

Most of the failures were due to:
  1. Complexity of the underlying integration platform used
The 1.0 integration products were (are) so complex and convoluted that making changes costs the same as writing from scratch. So why pre-package at all?
  2. Over-architecture of the solution on the part of the developers
You see this over and over again. Developers with little customer implementation experience are charged with developing a pre-packaged integration solution. They start by trying to conceive every possible scenario at every customer. From there, there's no coming back. You build and build till you think you have everyone covered. The only problem is that now you need a 30-day training course to understand all the moving parts.
  3. A combination of 1 and 2

Integrations can absolutely be pre-packaged. The question that needs to be answered is "To what extent?". The answer is going to depend on 1st the use case, 2nd the applications being integrated, and 3rd the experience of the developer.

Some use cases lend themselves to being 90-100% pre-packaged or "canned". For example, user synchronization. The objects are usually very well defined and tend not to be customized per implementation.

Some use cases reach the 70-80% level. Customer master sync is a good example. For the most part, the customer objects on each side will be standard, but you'll find a varying number of custom fields that need to be included. The integration platform should provide an easy method for adding the additional required mappings, as in the sketch below.
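
For instance, a package could ship with the standard mappings baked in and let the implementer extend them declaratively, along these lines (a hypothetical Python sketch; the field names are only illustrative):

    # Shipped with the pre-packaged integration: standard field mappings
    # (Salesforce field -> SAP-style field, names for illustration only)
    STANDARD_MAPPINGS = {
        "Name": "NAME1",
        "BillingCity": "ORT01",
        "Phone": "TELF1",
    }

    # Added at implementation time, per customer, without touching the package
    custom_mappings = {
        "Region__c": "ZZREGION",
        "Tier__c": "ZZTIER",
    }

    field_map = dict(STANDARD_MAPPINGS, **custom_mappings)

    def transform(source_record):
        """Map a source record's fields onto the target schema."""
        return {target: source_record.get(source)
                for source, target in field_map.items()}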

You might argue that applications like SAP, which provide infinite amounts of customization capability, would be the exception, but that's not necessarily true. Generally, standard objects are only modified to include new custom fields.

Most use cases can be pre-packaged to at least 50%. The trick is not to overthink or overbuild. Trying to conceive of every scenario and account for it is impossible. As long as the underlying integration platform provides for easy modifications at implementation time, there's no need to over-architect the solution. You can deal with the one-offs at implementation time.


Monday, April 5, 2010

Top 10 Indicators that you're a 1.0 Application Company!

10. The last time you updated your user interface, Windows 3.1 was your OS.
9. Your UI is still Windows-based.
8. You read "Only the Paranoid Survive" and have locked yourself in a room.
7. Your API strategy is to expose your database tables and stored procedures.
6. You think Cloud and SaaS are only being adopted by small companies with little to no money.
5. You refer to your hosted instance of your application as SaaS.
4. Your idea of rapid customization capability is to lock an engineer in the room with your SaaS hosted instance.
3. You want to adopt a Cloud and SaaS strategy but are worried about eroding your high ASP.
2. You don't have a Cloud strategy yet.
1. Your Cloud strategy is to wait in your room till the fad is over.

Tuesday, March 30, 2010

Why is API development often an afterthought?

For most application developers, SaaS and on-premise, this seems to be the case.

There probably isn't a company out there that runs their entire business on one single application from one single vendor. SAP and Oracle would probably like it to be otherwise, but even they realize that it's a hard sell. SAP provides one of the better APIs in the 1.0 application space. A true sign of maturity. If your application offers any value to customers, then providing them with the ability to integrate it with other applications only serves to increase its value, not to mention its stickiness. An integrated application is much harder to replace.

If you already have APIs or a method of integrating your applications, how would you rank it using the simple scale below?
  • Level -1: Our services or engineers can write custom code to export/import data when necessary.
You're in trouble, whether you know it or not. How can you scale this model? How do you maintain all the custom code?
  • Level 0: Users can export or import a CSV file via a button or link in the application.
This is the bare minimum. Your users can manually import/export data, and whenever they want to switch to your competitor, it will be easy for them. If you don't already have a way to get/post the file via HTTP(S), add it so you can at least automate the process.
  • Level 1: We have XML over HTTP APIs. 100s of different operations exposed. As for a schema, we provide DTDs, or you can use the query option to see what elements are available per object so you can manually construct your requests.
Congratulations! You have APIs and can tightly integrate your application with your customers. However, you are certainly putting all the burden on your customers to figure things out. Integrating with your application will cost users more and keeping up with changing requirements or application customizations will require significant effort. Don't be surprised if integration is still an area of concern for your customers and services team.
  • Level 2: We provide Web Services (WS SOAP or REST-ful). We have well defined WSDLs and/or schemas (XSDs, RELAX NG, ...). For every customization in the application, users simply re-download/import the WSDL (or schema). We release a new version of our APIs with every release, which you need to adopt immediately.
You're well ahead of most of the market, but don't bask in the glory too much. You still have work to do. While a static WSDL or schema-based approach seems easy enough, just ask a developer how much they enjoyed re-importing the schema 20 times because the users were making last-minute changes. Consider moving your APIs to the next level.
  • Level 3: We provide meta-data driven APIs via Web Services (SOAP or REST). As users make customizations to the application, they simply have to retrieve the latest set of object meta-data to include any new or changed fields. As new versions are released, users need only upgrade if they want to take advantage of the new capabilities.

You clearly understand the value of APIs and have made it a priority. Thank you!

For those of you wondering who in the 2.0 application world has achieved level 3, take a look at Force.com's APIs. Their APIs are well documented and publicly accessible. Follow the leader!
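
For a taste of level 3, a single describe call against the Force.com REST API returns the object's current shape, customizations included. A rough sketch in Python, with the instance URL and token as placeholders:

    import requests

    instance = "https://na1.salesforce.com"            # placeholder instance
    headers = {"Authorization": "Bearer 00D...TOKEN"}  # placeholder token

    # One call returns every field on Account, custom fields included --
    # no static WSDL to re-download after each customization
    describe = requests.get(
        instance + "/services/data/v20.0/sobjects/Account/describe",
        headers=headers).json()

    for field in describe["fields"]:
        print(field["name"], field["type"],
              "custom" if field["custom"] else "standard")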

If you're planning on building or re-building APIs, I suggest reading "The Top 10 Most Common API Pitfalls" too. I have a slightly different view on a few of the points which I'll post very soon.

In conclusion, you have to make API development a core feature and part of your strategy. It's critical to your success.

Tuesday, March 16, 2010

Don't over architect your application integrations!

Simple and agile should be your foremost design considerations in integration. Trying to get applications with different functionality and schemas to talk to each other via different protocols or APIs is complicated enough; you don't need to add to it. Focus on the business reason why you're integrating the applications. If in doubt, ask your business user or CIO.

Keep these guidelines in mind as you embark on your next integration project:
  • If point-to-point works and it's all you need, don't spend time and effort designing a hub-n-spoke model and an all-encompassing uber canonical intermediary model. Unless you're in the Fortune 100 (50?) and have multiple identical applications across your enterprise, most of your integrations will be point-to-point. That's true even if you force a messaging bus (queue) in the middle. Don't confuse your underlying software/hardware architecture with your actual integration architecture. Everything may flow through a single hub (or server), but you can still just have a bunch of point-to-point integrations. If object A from application A only needs to be integrated with object A of application B, then stop there. If you have an integration platform worth its salt, you can always come back and evolve your integrations.
  • Avoid custom code like your job depends on it, which it will. Custom code is the path to the dark side and the famous Gartner spaghetti diagram. It's not point-to-point integration that gets you in trouble, it's the lack of a unifying integration platform that leads to spaghetti. Using an integration platform will give you consistency across the design and management of your integrations. If you're being tempted by the dark side to use custom code, remember that running tail on a log file doesn't constitute management and monitoring. If you're getting tempted to build your own awesome integration platform, go back to your business user or CIO and ask them what they wanted again.
When selecting off-the-shelf integration software, keep the following in mind:
    • Pick an integration platform that can deliver 80% of what you need out-of-the-box. Most enterprise application integration problems fall into the 80% category. The other 20% edge cases may need a different solution. Platforms and vendors that focus on those edge cases to differentiate themselves tend to provide the same model for solving all problems, with great pain and difficulty!
    • Make sure whatever solution you choose has solid coverage for the 3 basic types of application integration:
      • Data Synchronization - One time migrations or ongoing synchronizations.
      • Process Integration - Data syncs only get you so far; being able to write integration flow logic tying different business processes together is also important.
      • UI Integration (aka Mashups) - Perhaps the most underrated method of integrating applications. If all you need is visibility into data in another system, why synchronize the data? If you don't need to use the data in the target application for processing or reporting, then don't synchronize it. Most applications these days provide browser-based interfaces with the ability to embed iframes or other methods of incorporating external data sources. "... mashups can deliver an 80% solution at 20% the cost (or less)" (source: http://blog.programmableweb.com/2009/07/23/enterprise-mashups-continue-to-gain-momentum-as-part-of-enterprise-20/).
  • Apply standards when they make sense and avoid them when they only serve to complicate solutions. Having adopted SOA doesn't mean that everything needs to be exposed as a service. It certainly doesn't mean that you should force adoption of overly complicated WS-* "standards". Stick with the basics that you really need (vs. what would be cool). You'll find varying levels of WS-* "standards" adoption in the application marketplace. SOAP-based or REST-ful? Whatever works. You'll find that you'll have to be able to support all kinds of different "standards". Make sure your integration platform is flexible.