There are lots of bots crawling the web, poking at sites, trying to cause errors and find holes. That one is just a 404 (page not found) where some crawler is looking for a role service that may exist in some .NET installations. It's guessing the URL from something that does exist in the source of the page and trying to crawl it:
<script type="text/javascript">
//<![CDATA[
Sys.Services._AuthenticationService.DefaultWebServicePath = 'Authentication_JSON_AppService.axd';
Sys.Services.AuthenticationService._setAuthenticated(true);
Sys.Services._RoleService.DefaultWebServicePath = 'Role_JSON_AppService.axd';
//]]>
</script>
This is some JavaScript that gets added to the page because we do have a role service and an authentication service, but we do not use the default path, so the crawler does not find it. It's really not used for anything other than authenticating from a Silverlight app I'm working on that is just in the prototype phase. You can comment out these services in Web.config and then that JavaScript should go away, though it's not really hurting anything.

In this case, I looked up the IP address and it belongs to Google, so they must be crawling based on that JavaScript they see in the page. That's a bit strange, though, since our robots.txt file tells legitimate crawlers not to crawl the .axd extension.
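If you do want that script block gone, the services are registered under system.web.extensions in Web.config. Here's a rough sketch of what to comment out (element names are from the standard ASP.NET 3.5 schema; your actual config and attribute values may differ), which should stop the Role_JSON_AppService.axd reference from being emitted into the page:

<system.web.extensions>
  <scripting>
    <webServices>
      <authenticationService enabled="true" requireSSL="false" />
      <!-- Commenting this out should remove the role service script registration -->
      <!-- <roleService enabled="true" /> -->
    </webServices>
  </scripting>
</system.web.extensions>

Disabling roleService only affects the AJAX/Silverlight-facing endpoint, not the underlying ASP.NET role provider itself.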
Hope it helps,
Joe