
Regular Expression To Parse Links From Html Code

I'm working on a method that accepts a string of HTML code and returns an array containing all the links found within it. I've seen a few mentions of options like the HtmlAgilityPack.

Solution 1:

If you are looking for a foolproof solution, regular expressions are not the answer. They are fundamentally limited and cannot reliably parse out links, or other tags for that matter, from an HTML file, due to the complexity of the HTML language.

Instead, you'll need to use an actual HTML DOM API to parse out the links.
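As a sketch of what that can look like, here is a minimal example using the AngleSharp library (my choice of library and the sample markup are assumptions on my part; the HtmlAgilityPack shown in a later solution works just as well):

```csharp
using System;
using System.Linq;
using AngleSharp.Html.Parser;

static class LinkExtractor
{
    // Parse the markup into a real DOM, then query it with a CSS
    // selector and collect the value of every href attribute.
    public static string[] ExtractLinks(string html)
    {
        var parser = new HtmlParser();
        var document = parser.ParseDocument(html);
        return document.QuerySelectorAll("a[href]")
            .Select(a => a.GetAttribute("href"))
            .ToArray();
    }

    static void Main()
    {
        var html = "<p>See <a href='http://example.com'>here</a> " +
                   "and <a href=\"/about\">about</a>.</p>";
        foreach (var link in ExtractLinks(html))
            Console.WriteLine(link);
    }
}
```

Because the parser builds a genuine DOM first, attribute quoting, casing, and whitespace quirks are handled for you before the query ever runs.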

Solution 2:

Regular expressions are not the best idea for HTML; see the many previous questions on this topic.

Rather, you want something that already knows how to parse the DOM; otherwise, you're reinventing the wheel.

Solution 3:

Other users may tell you "No, Stop! Regular expressions should not mix with HTML! It's like mixing bleach and ammonia!". There is a lot of wisdom in that advice, but it's not the full story.

The truth is that regular expressions work just fine for collecting commonly formatted links. However, a better approach would be to use a dedicated tool for this type of thing, such as the HtmlAgilityPack.

If you use regular expressions, you may match 99.9% of the links, but you will miss rare, unanticipated corner cases and malformed HTML.
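To make that concrete, here is a small sketch (the regex and sample tags below are hypothetical, chosen for illustration) showing how perfectly valid formatting variations slip past a regex that anticipates only one shape of href attribute:

```csharp
using System;
using System.Text.RegularExpressions;

class RegexCornerCases
{
    static void Main()
    {
        // A regex that anticipates href="..." or href='...' with no
        // surrounding whitespace and a quoted value.
        var regex = new Regex(@"href=[""'](?<url>https?://[^""']+)[""']");

        var samples = new[] {
            "<a href='http://example.com'>ok</a>",        // matches
            "<a href = 'http://example.com'>spaces</a>",  // missed: whitespace around '='
            "<a HREF='http://example.com'>caps</a>",      // missed: uppercase attribute name
            "<a href=http://example.com>unquoted</a>",    // missed: unquoted value
        };

        foreach (var sample in samples)
            Console.WriteLine("{0} -> {1}", sample,
                regex.IsMatch(sample) ? "matched" : "missed");
    }
}
```

Every one of those tags is a working link in a browser, and a DOM parser finds them all; the regex only finds the first.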

Here's a function I put together that uses the HtmlAgilityPack to meet your requirements:

private static IEnumerable<string> DocumentLinks(string sourceHtml)
    {
        HtmlDocument sourceDocument = new HtmlDocument();

        sourceDocument.LoadHtml(sourceHtml);

        // SelectNodes returns null when nothing matches, so guard
        // against that before projecting out the href values.
        return sourceDocument.DocumentNode
            .SelectNodes("//a[@href!='#']")
            ?.Select(n => n.GetAttributeValue("href", ""))
            ?? Enumerable.Empty<string>();
    }

This function creates a new HtmlAgilityPack.HtmlDocument, loads a string containing HTML into it, and then uses the XPath query "//a[@href!='#']" to select all of the links on the page that do not point to "#". It then uses the LINQ extension Select to convert the HtmlNodeCollection into a list of strings containing the value of each href attribute, i.e. the target of each link.

Here's an example use:

List<string> links =
    DocumentLinks(new WebClient()
        .DownloadString("http://google.com"))
    .ToList();

Debugger.Break();

This should be a lot more effective than regular expressions.

Solution 4:

You could look for anything that is sort-of-like a URL with the http/https scheme. This is not HTML-proof, but it will get you things that look like http URLs, which I suspect is what you need. You can add more schemes and domains. The regex looks for things that look like URLs inside href attributes (not strictly).

using System;
using System.Text.RegularExpressions;

class Program {
    static void Main(string[] args) {
        const string pattern = @"href=[""'](?<url>(http|https)://[^/]*?\.(com|org|net|gov))(/.*)?[""']";
        var regex = new Regex(pattern);
        var urls = new string[] {
            "href='http://company.com'",
            "href=\"https://company.com\"",
            "href='http://company.org'",
            "href='http://company.org/'",
            "href='http://company.org/path'",
        };

        foreach (var url in urls) {
            Match match = regex.Match(url);
            if (match.Success) {
                Console.WriteLine("{0} -> {1}", url, match.Groups["url"].Value);
            }
        }
    }
}

output:

href='http://company.com' -> http://company.com
href="https://company.com" -> https://company.com
href='http://company.org' -> http://company.org
href='http://company.org/' -> http://company.org
href='http://company.org/path' -> http://company.org
