Posts Tagged ‘search’

TNR Global Launches Search Application for Museum Collections

by Karen Lynn

TNR Global is launching the alpha version of a search application designed specifically for museum collections. Museum Collections Search is an application for digitally searching a museum’s collection. It can be made available to the public, restricted to internal staff for curation, or opened to a selected professional or research audience. Our White Paper explains the application in more detail.

Collections Search adds tremendous value to the research community and is often in line with the educational mission of many museums. A search feature is a resource for students and researchers, and can expand the overall audience by reaching people separated by distance or with limited physical mobility. When the public finds items in your collection, like historic letters, photographs, and other catalog items, through your search function, it can increase interest and traffic to the museum’s site and physical collection.

While the ability to search a museum collection brings immense value to the museum and the community that supports it, intellectual property is an ongoing concern for curators within the museum community. TNR Global recognizes this and has technologies to address access to material. When setting up a search, ease of use, responsiveness, and issues of ownership and privacy all combine to determine the search technology chosen and how it is applied. The search realm and results can be tailored based on the user. By defining the audience (or audiences) for the collection and the search, we can structure the presentation of the results: the public view can be a more restricted display, while a protected view can be more expansive and detailed.

We use open source search technology that works with most museum software systems and databases, including the popular museum software product PastPerfect. We customize our search solution specifically for your collection and tune results for the greatest relevance to queries.
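As a purely illustrative sketch (in Python, with invented field names rather than PastPerfect’s actual schema, and a plain list standing in for a real search engine), here is the idea of tailoring search results to the audience:

```python
# A minimal sketch of audience-aware result filtering. The records and
# field names are hypothetical, not PastPerfect's actual export format.

CATALOG = [
    {"id": "1901.04", "title": "Letter, 1901", "donor": "J. Smith",
     "appraised_value": 1200, "audience": "public"},
    {"id": "1944.17", "title": "Photograph, mill workers", "donor": "Anonymous",
     "appraised_value": 300, "audience": "restricted"},
]

# Fields safe to show anyone, versus the fuller view for staff and researchers.
PUBLIC_FIELDS = {"id", "title"}

def search(query, role="public"):
    """Return matching records, trimmed to what this audience may see."""
    hits = [r for r in CATALOG if query.lower() in r["title"].lower()]
    if role == "staff":
        return hits  # expansive, detailed view
    # Public view: drop restricted records and sensitive fields.
    return [{k: v for k, v in r.items() if k in PUBLIC_FIELDS}
            for r in hits if r["audience"] == "public"]

print(search("letter"))                # public view
print(search("photograph", "staff"))   # protected view
```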

TNR Global has a long history with the museum community. Our CEO is Principal of the organization We Love Museums and is a member of dozens of museums worldwide. He is involved with a number of archival and curatorial indexing projects. He has merged his lifelong career in database and web technology with his passion for art, education, and history with creating a search solution to benefit museums and their patrons. To get started, contact us for an evaluation of your Museum Collection Search Project today!

The Future of Search Doesn’t Come in a Box: The Google Mini Says Goodbye

by Karen Lynn

The future of search doesn’t come in a box.

Last week, while many were on vacation, Google abandoned the smallest member of its Search Appliance family, the Google Mini. The small blue piece of external hardware was used for smaller data sets with stable, some might say stagnant, data and slow, steady query rates. If you were a smaller business with search demands that weren’t, well, too demanding, this piece of hardware could help you for a reasonable price tag.

Search evolves like all technologies do. Developers incorporate emerging technologies into their skill sets, and open source technologies like Lucene/Solr have matured into a competitive option for companies of all sizes. IT managers are finally ready to move away from the confines of a search appliance in a box to a more agile solution that offers room for growth, a lightweight application, and a healthy, growing community. Without the hefty annual licensing fees of a commercial product, Solr can save small to mid-sized companies and startups valuable cash to invest in other areas of their businesses.
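For a sense of how lightweight the open source route can be, here is a minimal sketch of querying a local Solr core over its standard REST interface; the host, port, and core name are assumptions for illustration:

```python
import requests

# Query a local Solr instance over its standard select handler.
# The host, port, and core name ("products") are assumptions.
SOLR = "http://localhost:8983/solr/products/select"

resp = requests.get(SOLR, params={"q": "title:widgets", "rows": 10, "wt": "json"})
resp.raise_for_status()

# Solr returns matches under response.docs in its JSON envelope.
for doc in resp.json()["response"]["docs"]:
    print(doc.get("id"), doc.get("title"))
```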

Open source technologies aside, many are speculating whether Google will retire some of its other pieces of hardware, like the well-known GSA (Google Search Appliance), although Google has newly released version 6.14 with an updated website that explains its features. Google continues evolving its enterprise search offerings to include a hosted search solution for e-tailers called Google Commerce Search, along with its standard Google Site Search. Neither of these products comes in a physical blue or yellow box, and I wouldn’t expect Google’s next innovation to either.

There’s plenty of lively discussion about this on the Enterprise Search Professionals discussion board on LinkedIn.

Elasticsearch Evaluation White Paper Released: Promising for Big Data

by Karen Lynn

There are many new technologies emerging around search, and we’ve been investigating several of them for our clients. Search has never been “easy,” but Elasticsearch attempts to make it at least easier. Elasticsearch is billed as “built for the cloud,” and with so many companies moving into the cloud, it seems natural that search would move there too. This paper is designed to show you just how Elasticsearch works by setting up a cluster and feeding it data. We also let you know what tools we use so you can test out the technology, and we include a rough sketch of code as well. Finally, we draw conclusions about how Elasticsearch can help with problems like Big Data and other search-related uses.

Elasticsearch is an open source technology created by a single developer, Shay Banon. This paper is simply a first look at Elasticsearch and is not associated with any additional product or variation of Elasticsearch. The appeal for big data is due to Elasticsearch’s wonderful ability to scale with growing content, which is largely what the “big data problem” we all keep hearing about comes down to. It’s very easy to add new nodes, and it handles the balancing of your data across the available nodes. It handles the failure of nodes in a graceful way that is important in a cloud environment. And lastly, we simply evaluate and test the technology. We really don’t believe there is a one-size-fits-all technology in the realm of enterprise search; the right choice is highly dependent upon your systems, how many documents you have, how much unstructured data you have, and how you want your site to function. That said, in terms of storing big data, Elasticsearch is as capable as any Lucene-based product; it can handle a much larger load than the current Solr release, because the notion of breaking the index up into smaller chunks is “baked in” to the product.
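To give a flavor of what the paper walks through, here is a minimal sketch of creating a sharded index, feeding it a document, and searching it over Elasticsearch’s REST interface. It is not the paper’s code, and the endpoints follow recent REST conventions that may differ from the version we evaluated:

```python
import requests

ES = "http://localhost:9200"  # assumed local single-node cluster

# Create an index split into shards up front; Elasticsearch spreads the
# shards (and their replicas) across whatever nodes join the cluster.
requests.put(f"{ES}/articles", json={
    "settings": {"number_of_shards": 4, "number_of_replicas": 1}
}).raise_for_status()

# Feed it a document...
requests.post(f"{ES}/articles/_doc", json={
    "title": "Elasticsearch Evaluation",
    "body": "Built for the cloud.",
}).raise_for_status()

# ...refresh so the document is visible, then search.
requests.post(f"{ES}/articles/_refresh")
hits = requests.get(f"{ES}/articles/_search",
                    params={"q": "body:cloud"}).json()["hits"]["hits"]
print([h["_source"]["title"] for h in hits])
```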

Here is an excerpt from the paper:

“Products like Elasticsearch that lack a document processing component entirely become more attractive. In fact, most projects that involve a data set large enough to qualify as “big data”³² are building their own document processing stages anyway as part of their ETL cycle.”

If you are interested in downloading this free White Paper, sign up with us here.

If you would like help using Elasticsearch with your search project, contact us.

Selling Search Internally, Part 2: How to get buy-in from the staff

by Karen Lynn

You’ve convinced the powers that be that a search solution is a necessary strategy for success and competitive advantage. Congratulations! Nice work. Think your job is done? Not by a long shot.


Ask your staff: what would a good solution look like to them? After you’ve decided to move forward with a search solution, it’s important, no, it’s crucial that you strongly consider the end user. If you have a web portal that you manage, it’s worth polling your typical customer to gather vital data on how they want their experience to be. If you are looking at an enterprise search solution, you need to spend time exploring what your staff wants and needs out of a solution, and ensure your search solution is designed for them, not a boilerplate solution that only meets some of your needs. Search is an expensive endeavor; if you’re spending the money, you might as well get exactly what you want.


The truth is that if the end users of the solution don’t like it, they won’t use it. So getting the end user involved in the planning stage of the search project is vital to its overall success. If they have input into its overall features and design, they will be more invested in using it. Involving users manufactures all kinds of good-will collateral that can help develop better morale and a positive workplace. Doing this early in the process also introduces change more gradually to users, and people rarely react well to lots of radical change. Making them a part of the process, early and with plenty of preparation for change, can improve overall satisfaction rates with the search implementation after it’s complete.


Once the implementation actually goes live, you’ll need to ensure a training plan is in place and executed to ensure ongoing success. A search solution isn’t successful simply because it’s implemented. You need to include your whole team in the training process, and allow them to see for themselves how the solution is going to help them in their day-to-day tasks. If you included your staff in the planning of the design from the beginning, you’ll be much more successful once the solution is deployed, because they were part of the solution all along.

Search and Steel Girders

by Karen Lynn

“Search ties people together…”

This was one of the many themes at the Enterprise Search Summit in Washington, DC last week. It seems like a fairly obvious statement, but search quickly becomes part of the landscape, taken for granted even though the landscape couldn’t function without it. I have compared search to the steel girders of a skyscraper: when you walk into the building, you aren’t thinking about the beams holding the building up or connecting its floors, but without them, you wouldn’t have a building at all (you couldn’t even find the lobby). Other metaphors overheard include oxygen (invisible yet essential), sunlight (lest we remain in the dark), and electricity (everything stops without it).

Attendees of the conference know how important search is to companies, but increasingly, companies are taking search for granted. There is a fundamental gap in communicating the importance and difficulty of implementing a good search platform.

Companies that need search to run on their website or intranet expect search to work as it does on the Internet, but this is an apples-and-oranges scenario.

Here are the main disconnects:

  1. Search is easy
  2. Search is cheap
  3. It never has to be touched again

People expect search inside the firewall to function much like Google does outside the firewall. Google exists for end users and is really, really incredible. It geo-locates, it auto-completes. It uses your browsing history to provide more relevant results. And you have no financial investment in this really lovely, elegant, useful tool that doesn’t just assist your Internet experience, but facilitates it. Behind the firewall, though, things are different. Let me explain.

  • Your business content isn’t publicly available or known. I mean, that would be bad, right? It’s behind the firewall for a reason. So keeping it there while still allowing your staff to access certain levels of information takes some architecture and planning.
  • Google has thousands of developers working on this beautiful, incredible technology every day. They finance this by ad content. How many people do you have on your search team? And how much of their day do they really spend on search? What department is being billed for it? Business leaders need to embrace this as a necessary cost of doing business and budget accordingly, or face the crippling result of staff and customers not being able to find the information they need.

  • 80% of your content is unstructured, meaning search engines can’t really read it until some love and care is put into cleaning the data. This is a vital yet time-intensive process. Our VP of Search Technologies, Michael McIntosh, says, “We spend about 90% of our time on the document processing pipeline, conditioning data to be fed into the engine.” Moreover, unstructured data isn’t a fixed quantity; it’s being created faster than you can blink by your entire enterprise. Processing it is never a done deal. (A toy sketch of one such conditioning step follows this list.)
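Here is that toy sketch of one conditioning step in Python: turning a raw HTML page into a structured record an engine can index. Real pipelines do far more (language detection, entity extraction, deduplication), and this is not TNR’s actual code:

```python
import html
import re

def condition(raw_html, source_url):
    """One small conditioning step: turn a raw HTML page into a
    structured record a search engine can actually index."""
    title = re.search(r"<title>(.*?)</title>", raw_html, re.I | re.S)
    # Strip tags, unescape entities, collapse whitespace.
    text = re.sub(r"<[^>]+>", " ", raw_html)
    text = html.unescape(text)
    text = re.sub(r"\s+", " ", text).strip()
    return {
        "url": source_url,
        "title": title.group(1).strip() if title else "",
        "body": text,
    }

record = condition(
    "<html><title>Q3 Report</title><body>Revenue&nbsp;up 4%</body></html>",
    "http://intranet/q3")
print(record["title"], "->", record["body"])
```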


So if search connects us, hopefully this finds you thinking about search in more realistic terms. Search by itself may look like a simple box, but behind the box is a framework of girders, cross beams, and structural support that allows you to find what you need to “make money outside the firewall or save money inside the firewall.”

Living with Bad Enterprise Search: The Costs of Not Finding What Your Business Needs

by Karen Lynn

Do you remember TV Guide? There was a time when TV Guide sat on nearly every coffee table in every living room in America. If you didn’t have a subscription, you would grab it in the checkout line at the grocery store every week. If you wanted to plan out your evening in front of the tube, you would pick it up, thumb through it, read the synopsis of the show, and make an informed decision about watching Dallas or Falcon Crest that evening.


Then everything changed. Not overnight, but let’s fast forward to today. If you are 20, you don’t know what TV Guide is. Most cable packages have a guide built in so you can plan your viewing, record shows you will miss, or call up ones you want to watch, even from last season. Schedules for networks are posted online. And it’s a good thing, because back when TV Guide sat on our coffee tables, there were three networks. How many are there now? Imagine how thick that TV Guide would be.


The explosion of content is not exclusive to television. Businesses have seen an estimated 60% growth in digital content per year, and it shows no signs of stopping. Unfortunately, a lot of businesses haven’t upgraded their cable box, so to speak. They are looking for crucial documents and data on a manual dial. The truth is, companies have been living with bad search for a long time. And they’ve been paying for it.


IDC estimates that employees waste 2.5 hours a day looking for the information they need to perform their jobs, or recreating that information altogether. Additionally, making sound decisions depends strongly on having valid information to base those decisions on. Without access to information, bad business decisions are made, and bad business decisions are deadly to the enterprise. Business intelligence efforts can fall short without the right search platform powering fast, relevant results. Worst of all, if your customers cannot find the product or service they need on your system, they will go somewhere else for it.
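To put rough numbers on that estimate (every figure besides the 2.5 hours is an assumption for illustration):

```python
# Back-of-the-envelope cost of "2.5 hours a day looking for information."
# Assumed figures: 100 employees, $30/hour loaded cost, 240 workdays/year.
employees, hourly_cost, workdays, hours_lost = 100, 30, 240, 2.5

annual_cost = employees * hourly_cost * workdays * hours_lost
print(f"${annual_cost:,.0f} per year")  # $1,800,000 per year
```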


Content Management Systems are gaining in popularity, but what’s powering the search? How well does it deal with unstructured content? Does it give results with the relevance you need to make the best decision? Can your employees find what they need to execute their tasks? Can customers find your products?


Search technology is critical to the mission of any business. It facilitates cash flow, revenue, Business Intelligence (BI), productivity, and employee satisfaction. It has an immediate impact on the bottom line of the business. It is an essential ingredient of the successful enterprise on so many levels that running a business with inadequate search technology is like using an old copy of TV Guide to decide what to watch.

If you are assessing your search platform and its bottom-line impact on your business, contact us. We can analyze your systems and provide a free consultation on the best enterprise search solution for your company.

Building for Enterprise Search: A Systems View, Part 2

by Karen Lynn

When we left off, Michael Klatsky, VP of Systems Administration, was telling me how important communication between the systems side and the search side is to developing an enterprise search solution. The process of building, testing, monitoring, adjusting, more testing, and more monitoring ensures systems function the way they are intended to function. Let’s resume our conversation where Michael discusses the tools he uses to ensure the system he’s building works the way the client wants it to. This is the second portion of a two-part blog post.
*********************************************************************************************************************
Tools for BDD: Part 2

Karen: It’s sounding like the Search Team and Sys Admin Team need to have a good relationship and communicate often to ensure the system will accommodate the work the search team does.

Michael: Yes, the search team sometimes has to construct its scripts to conform to systems. Testing is run on both sides, but small changes can affect others down the line, so it’s important to incorporate expected behaviors into modeling and monitoring on both the applications and systems sides, and into how they interact with one another.

Karen: How do you make sure that happens?

Michael: We’re exploring some tools to help us make sure the machine will act just as we expect it to, like cucumber and cucumber-nagios. We’re using cucumber for basic modeling and for testing. Cucumber is cool for testing because it returns results to you in colors: red, meaning it failed; yellow, meaning there’s a problem; and green, meaning it’s good. According to their docs, you “keep running it until it’s a cucumber.”

Karen: Ah, I get it.

Michael: Right. And what cucumber-nagios does is take cucumber and let you create a nagios monitoring check script. So if you pass, great; if you get red, nagios will throw an alert to the systems administrator, so we have an opportunity to fix it before more is built.

Karen: Sounds like it’s an attentive way to build a system.

Michael: The only way to scale is to have machines do things for themselves. That’s the way to do it.

Karen: To automate.

Michael: Yes. Automation. Not just setting things up beforehand with automated configuration management, but testing afterwards to determine that your machine is behaving just as you (and your client) envisioned.
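To make Michael’s point concrete, here is a hand-written sketch of the kind of check cucumber-nagios produces (the real tool generates checks from cucumber features; this analogue is plain Python, and the endpoint is an assumption):

```python
#!/usr/bin/env python3
"""A Nagios-style check in the spirit of what cucumber-nagios generates:
probe the search endpoint and exit 0 (OK), 1 (WARNING), or 2 (CRITICAL)
so the monitoring server can alert the systems administrator."""
import sys
import time
import requests

SEARCH_URL = "http://localhost:8983/solr/products/select"  # assumed endpoint

try:
    start = time.time()
    resp = requests.get(SEARCH_URL,
                        params={"q": "*:*", "rows": 1, "wt": "json"},
                        timeout=5)
    resp.raise_for_status()
    elapsed = time.time() - start
    hits = resp.json()["response"]["numFound"]
except Exception as exc:
    print(f"SEARCH CRITICAL - {exc}")
    sys.exit(2)

if hits == 0:
    print("SEARCH CRITICAL - index is empty")
    sys.exit(2)
if elapsed > 1.0:
    print(f"SEARCH WARNING - slow response ({elapsed:.2f}s)")
    sys.exit(1)
print(f"SEARCH OK - {hits} docs, {elapsed:.2f}s")
sys.exit(0)
```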

For more information on how you can plan your enterprise search in cooperation with your systems administration team, contact us for a free consultation.

Open Source Search: Isn’t It Expensive?

by Karen Lynn

You’ve heard the debate on open source search vs. proprietary search. One question that constantly comes up for prospective clients is “What’s all this going to cost me?”

In these times, it’s a good question. Because proprietary search comes in neatly packaged, practically shrink-wrapped plans, it’s much easier to discern how much you will spend on a solution. But how much will it cost? That’s an entirely different question.

I see you cocking your head sideways.

Proprietary search has hidden costs. What if the software doesn’t perform the way you need it to? Does the software understand the nuances of your business? How adaptable is it? How much will it cost to adapt the software so it performs the way your business needs it to? Questions like these need to be asked, and answered. Eventually you will ask yourself: why am I paying for all of this? And your developer will ask, “Why can’t I access the source code?”

What I’m getting at is this: it is reassuring for a customer to see what a package costs, to understand what services come with a solution, and to anticipate what the licensing fee will cost on an annual basis. If it’s your job to research a solution and present findings to your executive team for a decision, then proprietary search, on the surface, seems the more secure choice. But rarely, if ever, are these solutions a perfect fit for the customer. It’s like buying a Ferrari, with all the brand recognition and polish a Ferrari offers, and never driving it past second gear, or cutting the wheel more than 15 degrees, or getting a chance to have your trusted mechanic look under the hood. This is why open source is such a good solution for businesses that want their IT to move quickly.

We’re hearing more buzz about companies waking up to the agility of an open source solution. Most recently, with the acquisition of Autonomy by HP, the industry is telling stories of ex-Autonomy customers migrating to Solr (open source search) with only the annual licensing budget to finance the migration. Without an annual expenditure of cash for licensing, and with the freedom of not being under a licensing agreement, companies quickly recoup the initial expenditure of a migration.

What kind of car does your company drive?

If you are examining the different choices for implementing search technology in your organization, contact us.  We’re happy to talk to you about the best solution for your business.


Continuous Integration for Large Search Solutions

by Karen Lynn

Managing large projects takes a smart approach and some intuitive thinking. One project we are currently engaged in is with a large publisher of manufacturing parts data. This has been an extraordinary project due to its scale and ever-changing scope. I spoke with our VP of Enterprise Search Technologies, Michael McIntosh, about how TNR Global handles complex projects.

Karen: This project is a big one. Tell me more about the site’s function. What is the focus?

Michael: Product search is the focus. The site contains tens of millions of documents, both structured and unstructured content. They also have a huge amount of data provided by the advertisers and the companies themselves about the products they sell. One of the advantages we have over a search engine like Google is access to a vast amount of proprietary data provided by the vendors themselves.

Karen: Tell me about how you are managing the project.  What are some of the variables you work with?

Michael: With this particular project, we are dealing with many different data feeds. There are many different intermediary metadata stages we have to generate to support the final searchable content. The client also changes their business logic frequently enough that if a month or more passes between data builds, it’s likely something has changed. For instance, they might have changed an XML format or added an attribute to an element in the data feed that will break something else down the line. The problem is there are so many moving parts that it’s almost impossible to do it all manually and always do it correctly.

Karen: What other kinds of business logic changes are you dealing with on top of the massive amounts of raw data?

Michael: Most of the business logic changes are when they need to modify how something behaves based on new data that’s available, or when they need to start treating the data in a different way.  Sometimes there is a change in the way they want the overall system to behave. They sometimes have some classification rules for content they like to tweak occasionally.

Another thing we consider is the client’s relevancy scoring and query pre-processing rules. So you need to consider if you issue a query and it fails, what happens then? What kind of fallback query do you use?  All these things are part of the business logic that is independent of the raw data. In summary, we have the raw data and we can do a number of things with it. They often want us to change exactly what we’re doing with it, how we’re conditioning it, and how we’re transforming it. We either tweak what exists or take advantage of new data that they’ve started including in their data feeds. The challenge is all these elements can change frequently.

Karen: This site is more of a portal than strictly an enterprise search project, isn’t it?

Michael: Yes. Enterprise search usually refers to searching for documents within an organization. This client runs a public-facing search engine that allows the public to perform product search across a very large number of vendors and service providers.

Changes come from their advertisers and the data they provide. Advertisers come and go. People pay for placement within certain industrial categories. It’s not like we get a static list of sites to crawl and that’s that; the list of sites we crawl changes weekly, sometimes daily. Things also need to be purged from the index. Say an advertiser’s contract ends and suddenly we need to stop crawling a site with thousands of documents; that data needs to be purged from the index promptly. So not only do we have to crawl new sites but purge old ones as well. This project is so massive that it’s not cut and dried. A lot of software development projects focus on a clear-cut problem: come up with a plan, tackle it, release it, and then maintain it. We’re constantly getting new information and learning new things about the people hitting the site.
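As a rough illustration only (Michael didn’t share the project’s code, and the post doesn’t name the engine behind the index; this assumes an Elasticsearch-style index and an invented advertiser_id field), a purge can be as small as a delete-by-query:

```python
import requests

ES = "http://localhost:9200"  # assumed cluster address

def purge_advertiser(advertiser_id):
    """Remove every document belonging to an advertiser whose contract
    has ended, so expired content stops appearing in results."""
    resp = requests.post(
        f"{ES}/products/_delete_by_query",
        json={"query": {"term": {"advertiser_id": advertiser_id}}},
    )
    resp.raise_for_status()
    return resp.json()["deleted"]

print(purge_advertiser("acme-industrial"), "documents purged")
```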

Karen: So it sounds like this project is always in a state of ongoing development.

Michael: We are building something that’s never been built before. One of the goals is to make this site remarkable. And we’re very excited to be a part of that. The scale of the project is quite big though, which is why we started using Continuous Integration.

The way our cycles work is that we perform big data updates, but by using CI, we can continuously update and integrate new data. We’re moving to a place where, through the practice of CI, we can perform daily builds, which gives us the time we need to fix problems before the data absolutely has to be live.

Karen: How do you implement CI into your day to day management of the project?

Michael: There are some pretty great open source tools that we’re using to implement CI. We use Jenkins to help us do Continuous Integration for frequent data builds, which is an intensive process for this particular client.

We field questions from the client about the status of different data builds. We hope to use Jenkins in conjunction with other tools to build data automatically and to have event-based data builds, triggered by some other event, with Jenkins automatically generating reports as the data is being built. Each time we run a build script, if the output differs from the previous build, Jenkins makes it easy to see that something is different; you can format your output in a way Jenkins understands. One of the cool things about Jenkins is its graphs illustrating differences, which help us identify issues that could pose a potential problem and fix them before we need to go live with the data.
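As one illustration of how a build step can tell Jenkins something is different (a hypothetical sanity check, not this project’s build scripts; Jenkins marks a freestyle build failed when a step exits nonzero):

```python
#!/usr/bin/env python3
"""Sketch of a post-build sanity check for Jenkins: compare this build's
record count against the previous build and fail the build (exit nonzero)
if the data shrank unexpectedly. File paths are assumptions."""
import json
import sys

with open("build/current_stats.json") as f:
    current = json.load(f)
with open("build/previous_stats.json") as f:
    previous = json.load(f)

cur, prev = current["record_count"], previous["record_count"]
print(f"records: previous={prev} current={cur}")

# A data feed that suddenly loses >10% of its records usually means an
# upstream format change broke something down the line.
if cur < prev * 0.9:
    print("FAIL: record count dropped more than 10%")
    sys.exit(1)  # Jenkins marks the build failed
print("OK")
```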

Karen: Any other tools?

Michael: For multi-node search clusters, we’re using a tool called fabric3 that uses SSH to copy data and execute scripts across multiple nodes of a cluster based upon roles. We have a clever setup where we’re able to inform fabric3 what services are running on each node in our cluster and link actions or commands to certain tasks, like building metadata. By linking them, fabric3 automatically knows which nodes to deploy data to.

Using open source tools like Jenkins and fabric3 makes the project a lot more manageable considering the large number of moving parts. It has allowed us to be successful in building this incredible site and making the search function relevant, accurate, and up to date.
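For a flavor of the fabric3 setup Michael describes, here is a minimal sketch using the Fabric 1.x API that fabric3 provides; the host names, roles, and scripts are invented, not the actual cluster layout:

```python
# fabfile.py - a minimal sketch of role-based deployment with fabric3
# (the Fabric 1.x API). Hosts and paths below are invented examples.
from fabric.api import env, roles, run, put

# Tell fabric3 which services run on which nodes of the cluster.
env.roledefs = {
    "search": ["search1.example.com", "search2.example.com"],
    "metadata": ["meta1.example.com"],
}

@roles("metadata")
def build_metadata():
    """Runs only on the metadata node(s)."""
    run("/opt/build/run_metadata_build.sh")

@roles("search")
def deploy_index():
    """Copies the freshly built index to every search node."""
    put("build/index.tar.gz", "/var/search/incoming/")
    run("/opt/search/install_index.sh /var/search/incoming/index.tar.gz")

# Usage: fab build_metadata deploy_index
```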

Cloud Platforms: The Promise vs. The Reality

by Karen Lynn

Recently our VP of Search Technologies, Michael McIntosh, sat down and talked with me about his thoughts on cloud computing and what businesses should be aware of when investing in the cloud.


Karen: So, how do enterprise search and cloud computing fit together? What’s good about it for companies?

Michael: The advent of cloud computing makes it a lot easier for companies to get into search without investing a huge sum of money up front. Some of the pay-as-you-go computing approaches make it possible to do things that in the past wouldn’t have been financially viable such as natural language processing on content.  Something that could have taken days, weeks, or even months can now take much less time by throwing more hardware at a problem for a shorter time span.

For example, you could throw 20 machines at a problem for 12 hours, do a bunch of computations in a massively parallel way, and then stop it as soon as it’s done, versus the old model where you have to buy all the hardware, or rent it, and make sure it’s not underutilized so you make your investment back.

But if you need a lot of processing power for a short amount of time, it’s really quite amazing what we can do now with an approach like this.
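In today’s terms, that pattern looks roughly like the following sketch using AWS’s boto3 SDK (which postdates this interview); the AMI ID and instance type are placeholders:

```python
import boto3

# A sketch of the "20 machines for 12 hours" pattern with boto3.
ec2 = boto3.resource("ec2", region_name="us-east-1")

# Spin up a temporary fleet for a massively parallel job...
fleet = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.xlarge",          # placeholder instance type
    MinCount=20,
    MaxCount=20,
)
for instance in fleet:
    instance.wait_until_running()

# ...dispatch the parallel work to the fleet here...

# ...then retire the fleet as soon as the job is done, so the meter stops.
for instance in fleet:
    instance.terminate()
```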

Karen: Is this a new technology for TNR?

Michael: TNR has been using cloud computing platforms for several years now, three or four years. Cloud computing in itself is sort of a buzzword, because distributed processing and hosting have been around for a while, but the pay-as-you-go computing model is relatively new. So we have a great deal of experience with the reality of cloud computing platforms vs. the promise of cloud computing platforms.

Karen: So, what is the difference between the “promise” and the “reality” of cloud computing platforms?

Michael: Well, a lot of people think of cloud computing as this magical thing: all their problems will be solved, and it will be super dependable because very large businesses like Amazon run the underlying infrastructure, so you don’t have to worry about it.

But as the physical infrastructure becomes easier to deploy, other critical factors come into play. You won’t have to worry about the physical logistics of getting hardware in place, but you will have to manage multiple instances, and when you provision temporary processing resources, you have to remember to retire them when they’re no longer needed; otherwise you’ll be paying more than you need to. And since virtualization runs on physical hardware you do not control or maintain, there are fewer warning signs of a potential systemic failure. Now, Amazon, which is the provider we use the most, does a good job of backing up instances and making things available to you even when there are failures. But we’ve had problems where we’ve lost entire zones. Even with multiple machines configured for fault tolerance, Amazon has experienced outages that have taken entire regions offline despite every conservative effort to ensure continuous uptime. So we’ve had entire service clusters go down because of problems Amazon was having.

It becomes critically important for companies to develop and maintain a disaster recovery plan, and to make sure anything critical is backed up in multiple locations. Historically, this has been hard to do because companies typically buy enough equipment for production needs, but not enough for development and staging environments.

Karen: That sounds like a costly mistake.

Michael: It can be very costly because people often develop disaster recovery plans without ongoing testing to confirm the approach continues to work. If the approach is flawed, when you do suffer an outage, you can be offline for hours, days or weeks. Even worse, you may not be able to recover your data at all.

Karen: That sounds extremely costly.

Michael: Yes, it’s no fun at all.

There are upsides, though. One plus is that cloud computing forces you to be more formal about how you manage your technical infrastructure. Take training, for example: with a new developer, we can just give them a copy of a production system and have them go to town on it, making modifications, whatever, without risking the actual production servers. And if they make a mistake, which is human (you have to factor in human error), you can provision a brand-new instance and retire the one that is fouled up, instead of spending hours and hours trying to fix the problem on the machine they were working on.

Karen: This sounds like it’s a lot more flexible and time efficient, with a layer of safety built in.

Michael: Yes. Cloud computing also comes in handy if you ever have a security breach. If a hacker gets into the system and the system is compromised, system administrators can go in and try to correct the problem, but hackers often install backdoors so they can get in and out. A cloud platform with a good disaster contingency and backup plan lets system administrators bring the whole compromised instance down and stand up a fresh, fully patched machine without the backdoors in place. This is pretty easy to do with a cloud platform.

Karen: So TNR can help their clients do all these things?

Michael: Yes, we’ve worked with large customers over many years and we’ve seen a wide variety of things that can possibly go wrong, and we’ve been through several physical service outages both with Amazon Web Services and with Rackspace.

Cloud computing in itself is no panacea, but if you have the technical and organizational proficiency to effectively leverage the platform, it can be a powerful tool to accelerate your company’s rate of innovation.

If you are assessing the cloud as a solution for your business, contact us. There are a variety of hosting options that can save your company money and minimize outages. Let us show you the option that is the best fit for your organization.