I came across an unusual problem about a week ago which has probably been an issue on this particular computer for a long time without me realising it. I already have one other application on there which uses a large amount of memory if left open for a long time, so I have a routine of restarting that application once every couple of weeks.
Unfortunately such a thing isn’t really possible with an antivirus program without restarting the computer, and this particular system needs to stay running as much as possible. Restarting it is something which I can only really schedule in a couple of brief windows each week without causing other issues.
This particular system is running Windows 7 so it is well and truly out of date in terms of Windows support, and Eset doesn’t release new software versions for it but does continue to provide antivirus definitions for it. Updating it to Windows 10 or Windows 11 isn’t really an option due to some of the software running on it being antiquated. One piece of software in particular has been discontinued and replaced by a different product which doesn’t quite work the same way and isn’t suitable for my purposes any more. I don’t know if installing and activating the old software on a new system would even be workable and the last time I had an interaction with support for that software, it left me less than certain that they knew much about how it worked at all. So I’m left needing to continue to run a Windows 7 system. While this does present some security risks, if properly managed these can be largely mitigated.
So, back to the problem I started to encounter with Eset Antivirus.
The other week I was investigating some performance issues with the machine and noticed one component of Eset was using over 900MB of RAM. This was unusual as, in my observations, it had never really left double digits before. The process in question was eguiProxy.exe. This process acts as a bridge between the Eset program window and the backend processes, allowing the Eset window to get and display information about the Eset Antivirus status and allowing the user to start scans and so on without needing administrative privileges. The eguiProxy.exe process is supposed to close shortly after the Eset window is closed, but a bug in some version 16 installations causes eguiProxy.exe to not close (and in some cases to run even if the Eset window was never opened) and to instead just sit there and eat up RAM indefinitely. Sometimes it will close after a few days, while on other occasions it just sits there until it is using up as much memory as the system will allow and starting to cause issues for other processes.
Memory usage after a few days of uptime
I had to restart this machine at a not-at-all optimal time to clear the excess memory usage and allow the rest of the system’s software to function normally, and due to the timing of this also had to manually fix a handful of processes which were interrupted or failed.
I was running Eset Antivirus 16.0.26.0
Eset released an update to version 16.0.28.0 to solve this issue, however in most cases the Eset application does not automatically update to this version and instead requires a manual update. As you can see in the above screenshot, Eset thinks it is up to date despite being on version 16.0.26.0 and not 16.0.28.0.
Hopefully the file remains in that location, however pages on Eset’s websites have a habit of moving around quite a lot, so I have decided to mirror the file myself. It can be downloaded here. As it is a signed file, you can verify its authenticity by checking whether Windows accepts the signature as valid. Regardless, if you can find the file on Eset’s websites, it is better to get it from there than from a random website on the internet such as mine, but I provide the download just in case you can’t find it elsewhere.
What isn’t clearly explained on the Eset website and doesn’t become apparent until you try to install it is that to install it, the system must be running at least Windows 7 SP1 with two specific updates installed, KB4474419 and KB4490628. If you try to install the Eset update without those Windows updates, it will refuse to install and send you to a series of Eset pages which provide a mishmash of information about whether or not you can install the Eset update.
As Microsoft stopped providing updates for Windows 7 some time ago, I disabled the Windows Update service (you can do that in Windows 7 – it’s a pain in Windows 10 and later, but I documented a method for doing so a few months ago) as I found it was often using excessive CPU to check for updates which were never coming. In order to install those updates, the Windows Update service must be enabled, so I had to re-enable it temporarily.
It turned out I already had SP1 and KB4474419 installed, and just had to install KB4490628. Once I did that, Eset version 16.0.28.0 was happy to install. The installer requests a login to Eset Home but this is not necessary. If Eset Antivirus is already activated, the new version will recognise that once installed, but if it isn’t activated you can always activate or log in after installation.
So now I have version 16.0.28.0 installed
And pleasingly the eguiProxy.exe process now only opens if the Eset window is opened, and closes shortly after the window is closed, no longer draining memory until Windows is left exasperated at the diminished resource.
In recent months I have received occasional correspondence informing me that someone has registered on this blog but not received the automated email confirming the registration and allowing them to set a password. I had thought it was a bug which had crept into the WordPress installation over numerous upgrades across nearly two decades, but it seemed strange that I was still receiving the blog’s emails about new comments and user registrations without issue. I wasn’t quite able to put my finger on what was going on.
The other day I received another one of those messages and was pleased to receive it within a few hours of the person registering as it meant all of the relevant logs would still be fresh, so I set about investigating. The WordPress installation doesn’t keep logs of the emails it sends (although it might generate an error which would be logged by the webserver if it encountered an error while trying to send an email) but the server itself does keep logs of email sending activity, so I had a look there.
I could see from this log that as far as the webserver’s internal email server was concerned, the email to me advising of a new registration as well as the email to the new user were both generated and sent. This immediately ruled out WordPress as the problem as it had clearly generated and sent both emails. The problem had to be further down the line.
Now, I should explain that email for samuelgordonstewart.com is not hosted on the same server as the website; however, the server for the website has an email server so that it can send emails. This server can also be used to receive emails but, for me at least, it is not used for receiving. Like many websites, this site is hosted on a shared server containing many completely unrelated websites. Each of those websites could generate and send emails, and for the hosting company there is always a risk that an insecure script on someone’s website could be exploited and used to send out spam. That would put an unnecessary strain on the server’s resources and could get the server blacklisted by a bunch of spam filtering services, affecting all of the websites on the server, not just the website generating the spam. To mitigate this risk, my webhost, VentraIP, employs an outbound spam filter. Emails from this server and many other servers in their fleet are funnelled through the outbound spam filtering before being sent on to wherever they’re intended to go. This outbound filtering isn’t particularly vigorous, but it is just enough to avoid having one of their servers send out copious amounts of obvious spam.
Unfortunately this means the log’s indication that the email was accepted by the receiving server doesn’t mean much, as all it really says is that the outbound spam filtering server accepted it. Beyond that, what happened to the email can’t be determined from this log.
At this point I could have asked my webhost to check the spam filter logs to see what happened and see if Gmail’s servers accepted the email, and while that might have provided some information, it probably wouldn’t have told me much, and there was more I could investigate first. There were two clues in the logs I already had. Firstly, the receiving mail server “out.smarthost.mxs.au” was not one I was familiar with, and secondly the ultimate destination was supposed to be Gmail which has some fairly strict sender verification checking as part of its spam filtering.
One of the first lines of defence against spam is a domain name’s SPF record. The main purpose of this record is to determine which servers are allowed to send email on behalf of the domain. A few months back I made a change to one character in the SPF record of samuelgordonstewart.com. At the end of the record I changed
~all
to
-all
This had the effect of changing the policy of the SPF record from “servers which aren’t explicitly allowed to send email for this domain might still be OK to send such emails” to “only email from servers which are explicitly allowed to send emails for this domain should be accepted, everything else should be rejected”.
I changed this rule at the time because (1) I should have done so a long time ago, and (2) I had noticed I was receiving spam allegedly from my domain but which had clearly come from servers with no connection to my domain whatsoever and I wanted to stop this from happening.
Going back to the logs, the fact I didn’t recognise “out.smarthost.mxs.au” as a server which had been doing filtering for my webhost made me wonder if it was not present in my domain’s SPF record and emails going through it might have been getting rejected by Gmail.
To cut a long story short on this, the answer was yes. At some stage my webhost had changed how they organised their outbound filtering, and my SPF record had become outdated as a result. The DNS records which host the SPF record are in fact hosted by my webhost, so in theory they could have updated this automatically and for many of their clients they almost certainly did, however as I had made a number of custom changes to my DNS records including my SPF record over the years, it was probably beyond the scope of their automated systems to make this change for me. In fact, the way my SPF record was configured, their automated system could have drawn the inference that I didn’t want their outbound filtering to be allowed to handle mail from my domain and thus adding such a record would have been inappropriate.
My SPF record was
v=spf1 ip4:103.42.110.11 +a +mx +include:spf.hostedmail.net.au +include:spf.messagingengine.com -all
Effectively what this said was that I was permitting mail to be sent by the server at 103.42.110.11 (the IP address of the server hosting the website), any server listed in the domain’s A records (this rule basically duplicates the first rule but allows the IP address of the server to be changed without me having to manually add the new IP address in), any server listed in the MX records (the servers which receive email for the domain) plus any servers specified by the records of spf.hostedmail.net.au and spf.messagingengine.com.
spf.hostedmail.net.au had previously included the outbound filtering of my webhost. This record belongs to my webhost’s separate email hosting service which I used to use. I believe it shared outbound filtering with their webservers, but apparently doesn’t any more.
spf.messagingengine.com belongs to Fastmail which is my current email host.
When I checked the SPF record of another domain I have hosted by VentraIP, I noticed it contained a different server: spf.hostingplatform.net.au, which is indeed the record for my webhost’s outbound spam filtering.
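If you want to see what a domain’s SPF record currently says, all it takes is a DNS lookup for the domain’s TXT records. Here is a quick PHP sketch using the built-in resolver (any DNS tool such as dig or nslookup will do the same job; the domain and include name are just the ones from my own situation):

<?php
// Fetch the TXT records for the domain and pick out the SPF record.
$records = dns_get_record("samuelgordonstewart.com", DNS_TXT);
foreach ($records as $record) {
	// SPF records are TXT records which begin with "v=spf1".
	if (isset($record['txt']) && strpos($record['txt'], 'v=spf1') === 0) {
		echo $record['txt'] . "\n";
		// Check whether the webhost's outbound filter is authorised for the domain.
		if (strpos($record['txt'], 'include:spf.hostingplatform.net.au') !== false) {
			echo "The outbound filter is listed in the SPF record.\n";
		} else {
			echo "The outbound filter is NOT listed in the SPF record.\n";
		}
	}
}
?>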
So I adjusted my SPF record to include this:
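v=spf1 ip4:103.42.110.11 +a +mx +include:spf.hostedmail.net.au +include:spf.hostingplatform.net.au +include:spf.messagingengine.com -all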
(I can probably remove the spf.hostedmail.net.au include as it is no longer needed, but one change at a time…)
Then I registered a new account on this blog using the email address of a Gmail account I have access to. I don’t have a personal account at Gmail and haven’t for a very long time…in fact I probably wouldn’t have any account with Google at all if it wasn’t for the fact I have to have an account with them for YouTube. Email contains an awful lot of sensitive information about a person and I’d rather pay to have my email hosted somewhere where I can be confident it’s not getting scanned for advertising targeting or profiling purposes. Anyway, the registration email went through…it landed in the spam folder and Gmail noted the email looked very similar to emails it had previously rejected, but at least it got delivered and wasn’t silently blocked. I was then able to mark it as “not spam” to help train their filters and hopefully with time Google will start to recognise that emails from my blog are legitimate again.
What’s interesting about all of this is that various email services and spam filters have differing ways of handling spam and interpreting things. In this instance, I was receiving emails from my blog at Fastmail without any issue but Gmail was blocking them completely. So it seems that Fastmail and Gmail have different ways of deciding which server is the sender of the email, and although I pay Fastmail for my email service and am quite happy with them, frankly I think Gmail has the correct interpretation here.
Every email you send or receive is basically just a big heap of text. There’s a lot of text you don’t normally see in the “headers” with information about where the email is from and where it has been, and attachments are encoded as text which looks like pages and pages and pages of gibberish.
A portion of the headers of an email sent to me by this blog advising me of a new user registration
The headers contain information about every server which handles the email along the way, including the time the server received the message and where it received it from. Email servers often add other information as well, such as any spam filtering checks they did, or in the case of an email server on a webserver, which account on the webserver generated the email. Ultimately this is just text and there’s no way for a mail server further down the chain to verify any of the information added at an earlier stage. The only information which a mail server can be sure of is the address of the server or device which it received the message from, and any information the server adds itself.
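To give a rough idea, the “Received” chain on an email from this blog would look something like the following. The out.smarthost.mxs.au name and the webserver’s IP address are real ones from this story, but the other hostname, the IDs, the recipient address, and the timestamps are invented for the sake of illustration:

Received: from out.smarthost.mxs.au (out.smarthost.mxs.au [192.0.2.25])
        by mx.google.com with ESMTPS id a1b2c3
        for <newuser@gmail.com>; Mon, 1 Jan 2024 10:00:05 -0800 (PST)
Received: from webserver.example.net (webserver.example.net [103.42.110.11])
        by out.smarthost.mxs.au with ESMTP id d4e5f6;
        Mon, 1 Jan 2024 10:00:03 -0800 (PST)

Each server prepends its own “Received” header, so the most recent hop appears at the top, and Gmail can only vouch for the topmost one: the connection it received itself from out.smarthost.mxs.au.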
Fastmail seems to be accepting that email might get routed via another server but as long as the headers list an authorised server as the originating source of the message, the email should be let through. Whereas Gmail is much more strict and will reject an email if the server it receives the message from isn’t an authorised server for that domain, regardless of what is listed in the headers.
Given it is impossible to verify details listed in the headers by previous servers in the chain, it is possible to fake a portion of the headers of an email, and a sufficiently sophisticated spam operation would be wise to do just that in order to make it look like the ultimate source of the email is authorised. In fact I have no doubt some spam operations do just that.
SPF isn’t the be-all-and-end-all of spam filtering by a long way, but it’s an important first step. I know Fastmail is used to receiving email from my webserver and knows it’s not spam, but the fact that it seems to let perceived reputation and unverifiable header text cloud the judgment of its spam filtering is a concern. I can see merit in sending such emails to the spam folder rather than Gmail’s policy of flat-out silent rejection and deletion, and if Fastmail had been doing that, I would have picked up on the SPF record not listing the correct outbound filtering servers sooner, as the headers inserted by Fastmail’s spam filters would have provided that information. Ultimately, though, I think Gmail’s policy of treating the server which sent it the message as the sender to be checked against SPF is the correct methodology, even if I think some of those emails could be put in the spam folder rather than being silently deleted.
Fastmail’s spam filtering is not proprietary to them. Some aspects of it might be but it is built on systems widely used elsewhere for spam filtering, so one has to wonder how many of the spam filters in use by email servers right around the world have an overly permissive approach to SPF records and are willing to take the word of header text which may be completely illegitimate with no way of being checked. Too many, I fear.
Something I enjoy doing is playing with older computer systems or getting virtual or emulated versions of older computer systems running on modern machines, and using that to play with older games and software. This is especially enjoyable when I can get some of my favourite childhood games to run.
In today’s video I go through some of the systems and games I have running, and well…while it’s fun, I’m not actually all that good at computer golf!
Also featured is the wonderful free DOS-based cash register program by Dale Harris, which Dale is still maintaining, and some fun with internet browsing which doesn’t quite work in older browsers.
One of the benefits which we are now starting to see from switching off analogue television is that radio frequencies previously needed for television can now be used for other purposes, such as expanding the amount of bandwidth available to mobile phone providers.
Telstra, Optus, and TPG have all bought some of this recently-vacated space on the 700MHz band, with Telstra and Optus switching on their new frequencies on January 1. Unfortunately, as with most of these launches of higher-speed mobile technologies, different carriers are implementing it differently, which means differing speeds and compatibilities among carriers. Regardless of that, 700MHz offers better range and building penetration than the common existing frequencies, and thus should improve coverage and reliability for people with phones which support it.
Telstra are using a system which increases speeds by having customers’ phones use both the 1800MHz and 700MHz frequencies concurrently. Very few phones currently support this (the short version being that if you own a phone and aren’t sure if it supports it, it probably doesn’t…you’d almost certainly know if it did). The speeds on offer are quite impressive though, with 150Mbps on the download side and 40Mbps on the upload side (that’s megabits per second, just like the speeds advertised for wired internet connections…divide by eight for megabytes per second, so 150Mbps is about 18.75 megabytes per second).
A handful more phones (including iPhone 6 and Samsung Galaxy S5) support using the 700MHz frequency without coupling it to another frequency, and for them speeds of 80Mbps for downloads and 40Mbps for uploads under good conditions are reasonable.
Optus are using the latter option of using 700MHz on its own, and thus their best speeds are compatible with more phones, but unfortunately they have rolled this out to fewer places than Telstra at this stage.
The good news for those of us on phones which do not support the new frequency is that both Telstra and Optus are upgrading the backhaul networks to cope with the greater promised speeds, and this means greater capacity even on the older 3G and 4G frequencies, which should improve speeds to some extent on these older technologies, especially in places which become quite congested.
As far as coverage for the new 700MHz networks goes, the basic rule of thumb is that capital cities are covered, and major regional centres are covered. Telstra have gone to some length to spell out which areas are covered by them, while Optus have been a bit less forthcoming, probably so as to avoid a press release from Telstra pointing out which locations covered by Telstra are not covered by Optus.
Optus’ release quotes David Epstein, Vice President, Corporate and Regulatory Affairs at Optus, as stating that “We are improving our 4G network today with 700MHz in parts of the Sydney CBD, Chatswood and Eastern Suburbs; Brisbane CBD, the Gold and Sunshine Coasts; Adelaide CBD, Melbourne CBD, Geelong, Frankston and Mornington Peninsula; plus Hobart CBD, Perth CBD, Claremont and Cottlesloe. Whether you are in Armidale or Sydney in New South Wales, Townsville or Brisbane in Queensland, Ceduna or Adelaide in South Australia, or Wangaratta or Melbourne in Victoria, with the right device Optus 4G will have you covered as our network expands”.
Telstra’s 700MHz coverage, again courtesy of Gizmodo (although I should note this list indicates which towns were to receive coverage as of January 2, and while it probably includes towns which received it on January 1, I can’t be entirely sure that it does)
NSW
Cessnock: Loxford
Cooma: Polo Flat
Dubbo: Dubbo
Dungog: Dungog
Forster Tuncurry: Tuncurry
Maitland: Windermere, Aberglasslyn, Anambah, Bolwarra, Gosforth, Lorn, Melville, Mount Dee, Oakhampton, Rutherford, Telarah, Windella, Horseshoe Bend, Morpeth, Oswald, Raworth, Beresfield, Metford, Pitnacree, Maitland, South Maitland
Milton Ulladulla: Mollymook, Mollymook Beach
Mittagong: Aylmerton, Braemar
Narellan: Oran Park
Newcastle: The Junction, Wickham, Georgetown, The Hill, Bar Beach, Hamilton East, Tarro, Cooks Hill, Broadmeadow, Hamilton, Tighes Hill, Hamilton North, Islington, Maryville, Mayfield East, Waratah, Newcastle CBD, Hamilton South, Newcastle West, Mayfield North, Newcastle East, Stockton
Queanbeyan: Queanbeyan West
Shoalhaven: Shoalhaven Heads
Singleton: Hambledon Hill, Gouldsville, Mount Thorley
Sydney: Alexandria, Barangaroo, Darlinghurst, Dawes Point, Eveleigh, Forest Lodge, Haymarket, Millers Point, Pyrmont, Rosebery, Ultimo, Edgecliff, McGraths Hill, Pitt Town, Pitt Town Bottoms, Vineyard, Glebe, Dulwich Hill, Birchgrove, Double Bay, Lewisham, Lidcombe, Newtown, Petersham, Rozelle, St Peters, Stanmore, Sydenham, Ashcroft, Cartwright, Hammondville, Hoxton Park, Lurnea, Macquarie Links, Miller, Sadleir, Wattle Grove, Annandale, Clontarf, Cremorne, Cremorne Point, Mosman, Cammeray, Mount Druitt, North St Marys, Rooty Hill, Tregear, Whalan, Oxley Park, Artarmon, Crows Nest, Greenwich, Lavender Bay, McMahons Point, Naremburn, Neutral Bay, North Sydney, Northwood, St Leonards, Waverton, Willoughby, Wollstonecraft, Woolwich, Auburn, Camellia, Constitution Hill, Granville, Harris Park, Holroyd, Mays Hill, Merrylands, North Parramatta, Oatlands, Parramatta, Pemulwuy, Pendle Hill, Rosehill, Rydalmere, South Granville, South Wentworthville, Telopea, Westmead, Daceyville, Eastlakes, Kensington, Caringbah South, Maianbar, Yowie Bay, Watsons Bay, Waverley, Woollahra, Woolloomooloo, Bellevue Hill, Bondi Beach, Bondi Junction, Bronte, Centennial Park, Darling Point, Elizabeth Bay, Moore Park, North Bondi, Paddington, Point Piper, Potts Point, Queens Park, Redfern, Rose Bay, Tamarama, Vaucluse
Tamworth: Gidley, Taminda, Wallamore
Tweed: Pumpenbil
Wollongong: Port Kembla
QLD
Ayr: Home Hill
Brisbane: Browns Plains, Heritage Park, Meadowbrook, Munruben, Park Ridge, Park Ridge South, Regents Park, Shailer Park, South Brisbane, Ashgrove, Auchenflower, Boondall, Camp Mountain, Chelmer, Clayfield, Draper, Eagle Farm, Ferny Grove, Fitzgibbon, Fortitude Valley, Gaythorne, Gordon Park, Grange, Hamilton, Hendra, Herston, Indooroopilly, Lutwyche, Margate, Milton, Mitchelton, Newmarket, Newstead, Northgate, Paddington, Petrie Terrace, Pinkenba, Redcliffe, Samford Valley, Samford Village, Spring Hill, St Lucia, Taigum, Taringa, Toowong, Wights Mountain, Wilston, Windsor, Wooloowin, Zillmere, Annerley, Dutton Park, Fairfield, Highgate Hill, Tennyson, Woolloongabba, Yeerongpilly, Yeronga, Alexandra, Kangaroo Point, Craignish, Nundah, Karragarra Island, Lamb Island, Macleay Island, Balmoral, Bulimba, Coorparoo, East Brisbane, Greenslopes, Hawthorne, Morningside, Norman Park, Oxley, Seventeen Mile Rocks, Sinnamon Park, Cordina, Graceville
Bundaberg: Kensington, Rubyanna
Central Queensland: Tieri
Gladstone: Barney Point, Gladstone Central
Gold Coast: Broadbeach, Broadbeach Waters, Mermaid Beach, Mermaid Waters, Maclean, Surfers Paradise, Wilsons Plains
Goondi: Goondi, Goondi Bend, Goondi Hill
Gympie: Victory Heights, Banks Pocket, Araluen
Hervey Bay: Beelbi Creek, Dundowran, Eli Waters, Toogoom
Innisfail: Belvedere, Cullinane, Hudson, Mighell, Mundoo, O’Briens Hill, Coolana, Harrisville, Lowood, Rifle Range, Tarampa, Wivenhoe Pocket
Mackay: Beaconsfield
Mt Isa: Happy Valley, Healy, Kalkadoon, Lanskey, Menzies, Mica Creek, Miles End, Mornington, Parkside, Pioneer, Ryan, Soldiers Hill, Sunset, Town View, Winston
Rockhampton: Bangalee, Berserker, Frenchville, Koongal, Lammermoor, Park Avenue, Wandal
Sunshine Coast: Alexandra Headland, Sunshine Coast Regional Districts, Twin Waters, Minyama, Mountain Creek, Buddina, Marcoola, Pacific Paradise, Point Arkwright, Valdora, Parrearra, Maroochydore, Mooloolaba, West Woombye, Sunrise Beach
Toowoomba: Blue Mountain Heights, College View, Crowley Vale, East Toowoomba, Lawes, Mount Kynoch, Postmans Ridge, Prince Henry Heights, Rangeville, Redwood, Rockville, South Toowoomba, Spring Bluff, Toowoomba, Withcott
Townsville: Castle Hill, Cluden, Condon, Gulliver, Heatley, Kirwan, Mount Louisa, Rasmussen, Thuringowa Central, Vincent
VIC
Albury Wodonga: Albury, East Albury, Lavington, North Albury, West Albury, South Albury
Ballarat: Alfredton, Bakery Hill, Ballarat, Ballarat East, Ballarat North, Black Hill, Bonshaw, Cambrian Hill, Canadian, Delacombe, Eureka, Golden Point, Invermay Park, Lake Gardens, Lake Wendouree, Magpie, Mount Clear, Mount Pleasant, Newington, Redan, Sebastopol, Soldiers Hill, Wendouree
Bendigo: Flora Hill, Golden Gully, Golden Square, Kangaroo Flat, North Bendigo, Quarry Hill, Spring Gully
Berwick: Cora Lynn, Garfield, Tynong, Vervale, Burnewang
Campaspe: Carag Carag, Colbinabbin, Corop
Castlemaine: Harcourt
Eastern Melbourne: Derrimut
Geelong: Bell Park, Belmont, Breakwater, Drumcondra, East Geelong, Geelong CBD, Geelong West, Manifolds Heights, Marshall, Moolap, Newcomb, Norlane, North Geelong, Rippleside, South Geelong, St Albans Park, Whittington
Hamilton: Mortlake
Kyneton: Woodend North
Melbourne: Narre Warren North, Ardeer, Albert Park, Balaclava, Caulfield North, Elsternwick, Elwood, Middle Park, Port Melbourne, Ripponlea, South Melbourne, St Kilda, St Kilda East, Bangholme, Frankston, Skye, Newport, Wandin North, Footscray, Seddon, Spotswood, West Footscray, Yarraville, Kingsville, South Kingsville, Williamstown North, Abbotsford, Carlton, Carlton North, Clifton Hill, Collingwood, East Melbourne, Fitzroy, Fitzroy North, Parkville, Princes Hill, Richmond, Southbank, West Melbourne, Aberfeldie, Ascot Vale, Flemington, Kensington, Moonee Ponds, Travancore, Braeside, Melbourne Airport, Aspendale Gardens, Bonbeach, Chelsea, Chelsea Heights, Edithvale, Waterways, Brunswick, Brunswick East, Brunswick West, Bittern, Boneo, Crib Point, McCrae, Merricks Beach, Rosebud, Rosebud West, Sorrento, Heatherton, Moorabbin Airport, Alphington, Fairfield, Northcote, Kew, Hawthorn, Hawthorn East, Armadale, Burnley, Kooyong, Malvern, Prahran, South Yarra, Toorak, Windsor, Albion, Cairnlea, Clarinda, Mernda
TAS
Hobart: Austins Ferry, Barretta, Battery Point, Bellerive, Chigwell, Claremont, Dennes Point, Dowsing Point, Dynnyrne, Electrona, Flowerpot, Gagebrook, Glenorchy, Howden, Howrah, Huntingfield, Killora, Lawitta, Leslie Vale, Lindisfarne, Montrose, Mornington, Mount Nelson, Mount Stuart, New Norfolk, Oakdowns, Old Beach, Opossum Bay, Otago, Rosetta, Rosny, Rosny Park, Sandy Bay, Tinderbox, Tolmans Hill, Tranmere, West Moonah
Launceston: Blackwall, East Launceston, Invermay, Launceston, Mayfield, Mowbray, Newnham, Newstead, Norwood, Prospect, Prospect Vale, Ravenswood, South Launceston, Youngtown
Devonport: Ambleside, Miandetta, South Spreyton, Spreyton, Tarleton
NT
Alice Springs: Alice Springs
Darwin: Bakewell, Bayview, Bellamack, Coolalinga, Darwin International Airport, Driver, Durack, East Side, Fannie Bay, Gray, Hughes, Larrakeyah, Leanyer, Muirhead, Parap, Pinelands, Sadadeen, Stuart Park, The Gap, The Gardens, Tivendale, Uralla, Winnellie, Wishart, Wulagi
SA
Adelaide: Collinswood, Gilberton, Walkerville, St Morris, Trinity Gardens, Evandale, Marden, Glynde, Felixstow, Payneham, Payneham South, Firle, Tranmere, Magill, Wayville, Everard Park, Black Forest, Frewville, Parkside, Eastwood, Glenunga, Toorak Gardens, Glenside, Linden Park, Stonyfell, Beaumont, Rose Park, Beulah Park, Kent Town, Heathpool, Kensington, College Park, Hackney, Joslin, Royston Park, Auldana, Rosslyn Park, Dulwich, St Peters, Clarence Park, Ashford, Glandore, Kurralta Park, North Plympton, Plympton, Mitcham, Lynton, Torrens Park, Para Hills West, Parafield, Evanston, Evanston Gardens, Evanston Park, Elizabeth, North Adelaide, Elizabeth East, Para Hills, Glanville, Birkenhead, Peterhead, Exeter, Moana, Seaford Rise, Sellicks Beach, Hindmarsh, Thebarton, Torrensville
Coober Pedy: Thevenard
Hamley: Hamley
Murray Bridge: Mobilong
Port Lincoln: Hawson
Port Pirie: Port Pirie South, Risdon Park South
Riverlands: Golden Heights, Holder, Ramco, Ramco Heights, Waikerie
The Barossa: Gawler West, Reid, Tanunda, Bethany, Vine Vale, Light Pass
Whyalla: Whyalla Playford, Mullaquana, Whyalla Norrie, Kimba
Yorke Peninsula: Kooroona, Moonta, Moonta Bay, North Moonta, Port Hughes
WA
Albany: Centennial Park, Frenchman Bay, Lange, Lockyer, Milpara, Mira Mar, Orana, Walmsley, Collingwood Heights, Spencer Park, Yakama, Vancouver Peninsula
Busselton: Geographe, Reinscourt
Forrestdale: Forrestdale
Kalgoorlie: Somerville, South Kalgoorlie, Kalgoorlie, Piccadilly, West Lamington, Boulder, Victory Heights
Mandurah: Parklands, Greenfields, Coodanup, Dudley Park
Perth: Gingin, Maylands, Bedford, Inglewood, Mt Hawthorn, Highgate, East Perth, North Perth, Coolbinia, Menora, Mt Lawley, Glendalough, Osborne Park, Herdsman, Churchlands, Tuart Hill, Joondanna, Yokine, West Perth, Kings Park, West Leederville, Leederville, Shenton Park, Daglish, Crawley, Nedlands, Claremont, Mt Claremont, Karrakatta, Mount Clarence, Wembley, Jolimont
Perth South: Oldbury, Applecross, Mt Pleasant, Casuarina, Mandogalup, Postans, Wandi, Anketell, The Spectacles
Southern Perth: Lathlain, Victoria Park, Burswood, East Victoria Park, Rivervale, Redcliffe, Ascot, Kensington, Como, Karawara
I suppose we can be thankful that the problems with analogue television were a characteristic of the technology and not the frequency, because I dare say not many people would be very excited about receiving high-speed fuzzy, ghosting Internet plagued with static and stuck in a 4:3 aspect ratio…although it would be fun to see one of the carriers thrust such a thing upon customers for a few hours on April Fools’ Day.
At this point in time, this blog is running a theme which it has been running since I first moved it to a WordPress installation in 2005. I have modified the theme a little bit over the years to make the colours better suit my tastes and to adjust a few functions to work a bit better for my needs. All that said, it is an ancient set of PHP scripts designed to work under an equally ancient version of PHP, and given how much has changed in PHP over the years it could be considered a miracle that the Blix theme still works at all.
Recently I came across a problem which I couldn’t make heads or tails of. Out of the blue, errors started appearing on this blog about failed login attempts, where a script on this blog had tried to log in to the database as the chief administrative user without a password:
Warning: mysql_query(): Access denied for user 'root'@'localhost' (using password: NO) in /home/samuelgo/wp-content/themes/blix/BX_functions.php on line 44
Warning: mysql_query(): A link to the server could not be established in /home/samuelgo/public_html/wp-content/themes/blix/BX_functions.php on line 44
Warning: mysql_num_rows() expects parameter 1 to be resource, boolean given in /home/samuelgo/public_html/wp-content/themes/blix/BX_functions.php on line 45
I first noticed this problem just after I fixed an issue where the WordPress installation couldn’t log in to MySQL, and thought it was related and possibly a sign of a very poorly implemented hacking of this blog, but it turned out to be unrelated. The reason WordPress was unable to log in to the database was that this site was moved from one server to another and a file with login details was slightly corrupted in the process, and this was easily corrected.
The above errors are related to a function which, on the Archives page, shows the number of comments on each post. Previously the “mysql_query” function in the BX_functions file used the login details which WordPress uses, but the “mysql” set of functions has been deprecated as of PHP 5.5 and, while it still works, now seems to need to be explicitly told which login details to use, or it just assumes it should log in as “root” without a password (which is possibly the dumbest set of login details I can think of, as it has almost no chance of working on anything but the most insecure of servers). This error, consequently, appears next to each and every post on the archives page, often multiple times. I was able to suppress these errors from displaying until I could get around to figuring out what the problem was, but this still caused the error to be dumped into an error log many thousands of times per day, causing the error log to grow by hundreds of megabytes each day until I would delete it before it could use up all of the disk space available to this website.
A Google search for the error message shows a lot of blogs running the Blix theme and related themes, but no solution to the problem, so now that I have figured out a workaround, I’ll post it here for the benefit of everyone.
The solution is a tad cumbersome in that it requires the WordPress database username and password to be added to the BX_functions.php file. In reality it is only a workaround, as the “mysql” functions have been deprecated in favour of other functions and, as such, will probably exhibit increasingly bizarre behaviour in future versions of PHP until support for them is completely removed. This solution works for now, but the only long-term solution is to change to a more modern WordPress theme…I’m still trying to find one that I like as much as Blix.
The solution is to edit your Blix theme’s BX_functions.php file. I would recommend making a backup copy of the file first. This file can usually be found in the /wp-content/themes/blix directory of your website, but if you have a version of Blix which is installed in a different location, then you’ll need to find BX_functions.php in whatever directory the theme is installed in.
You should see a section which looks like this, starting around line 43 (again this may vary if you have a customised version of Blix):
echo "<li>".get_archives_link($url, $text, '');
$comments = mysql_query("SELECT * FROM " . $wpdb->comments . " WHERE comment_post_ID=" . $arcresult2->ID);
$comments_count = mysql_num_rows($comments);
if ($arcresult2->comment_status == "open" OR $comments_count > 0) echo ' ('.$comments_count.')';
echo "</li>\n";
You will need to add a couple of lines as follows below the line starting with “echo” and above the line starting with “$comments = mysql_query”.
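mysql_connect("server", "username", "password");
mysql_select_db("databasename");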
Naturally you need to change these details so that
“server” is the address of your MySQL server (this is often “localhost”)
“username” is the username of your MySQL user
“password” is the password of your MySQL user
“databasename” is the name of the MySQL database used for your WordPress installation.
Do not remove the quotation marks from around these details.
If you’re not sure of any of these details, you should be able to find them in the wp-config.php file in the root directory of your WordPress installation.
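The relevant lines of wp-config.php look something like this, with your real details in place of the placeholders:

define('DB_NAME', 'databasename'); // the name of the MySQL database
define('DB_USER', 'username'); // the MySQL username
define('DB_PASSWORD', 'password'); // the MySQL password
define('DB_HOST', 'localhost'); // the address of the MySQL server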
Once you’re done, the above section of the BX_functions.php should look something like this:
echo "<li>".get_archives_link($url, $text, '');
mysql_connect("server", "username", "password");
mysql_select_db("databasename");
$comments = mysql_query("SELECT * FROM " . $wpdb->comments . " WHERE comment_post_ID=" . $arcresult2->ID);
$comments_count = mysql_num_rows($comments);
if ($arcresult2->comment_status == "open" OR $comments_count > 0) echo ' ('.$comments_count.')';
echo "</li>\n";
And the errors should go away.
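As an aside, if you would rather not lean on the deprecated mysql_* functions at all, the same comment count can be fetched through WordPress’s own $wpdb object, which already holds a working database connection and needs no credentials in the theme file. This is a rough sketch which I haven’t tested against every version of Blix, but it should be a drop-in replacement for the mysql_connect, mysql_select_db, mysql_query, and mysql_num_rows lines:

// Count the comments for this post using WordPress's existing database connection.
$comments_count = $wpdb->get_var( $wpdb->prepare(
	"SELECT COUNT(*) FROM " . $wpdb->comments . " WHERE comment_post_ID = %d",
	$arcresult2->ID
) );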
It’s an annoying issue, but it’s nice to have a solution, even if it is only really a temporary workaround in lieu of upgrading to a theme designed for a modern version of PHP.
I had great fun visiting the studios of TWiT.tv (known as the TWiT Brick House) yesterday. I had all the photos ready to go for this blog post yesterday afternoon, but ironically ran into a technical hurdle when I realised that there was some video as well. I’ll get to that shortly…but first…
The TWiT Brick House as seen from the other side of Keller St, Petaluma
The studios are located at 140 Keller St, Petaluma. TWiT’s wiki provides helpful directions, but it was easier to find than I expected. The building is quite distinctive on this street and the recommended parking garage which is listed on the site is about half a minute’s walk from the studios. I took a little longer than that to walk from my car to TWiT though as I took a detour to the other side of the road to take that photo.
I got there a little earlier than I had expected, a tad before 10am.
When I got inside, staff were discussing a lighting issue with some contractors, and accidentally turned off a bunch of lights in the studio in the process. Staff were busy, so I filled out the mandatory waiver and waited a few moments until they were less busy and could take me through. The studio portion of the building takes up a tad over half of the floor space, with other rooms taking up the other side of the building in an upside-down L shape. There are studio entrances behind reception next to the roundtable set and around the back near Leo’s office/set, plus a kitchen and toilets. The place actually looks bigger to me in real life than it does on screen. It is quite an impressive setup.
Tech News Today with Mike Elgan was about to start when I took a seat.
Tech News Today with Mike Elgan being filmed on February 12, 2014
One thing which impressed me was how little of this news program was scripted. Story introductions and some questions were scripted, but most of Mike’s questions were not. I might just be a bit too used to Australian news formats where questions are generally scripted, so it was nice to see proof of an anchor who truly understands the subject matter.
Just off to the right of the set from the perspective of where I was seated is another set which is used for The Giz Wiz among other shows. The program feed which was going out for broadcast was visible on the main screen on this set.
And if I walked a little way down the Giz Wiz set and looked across where Mike Elgan was seated, Leo’s office/set can be seen through the window, and on this side of that glass is where his weekend show’s call screener Heather Hamman sits. At the far left of the photo a large analog clock can be seen. This is on the back wall of the studio portion of the building, and is quite an attractive feature of that wall, but is sadly obscured by other objects in the wide shot of the studio used between shows on the live stream.
Throughout the filming of Tech News Today, I had wanted to get my digital SLR camera out, but alas I could not, as opening the velcro pouch would make too much noise and I did not want to interrupt or interfere with the broadcast. So I waited until after the show finished, only to discover that it was a waste of time, as the camera could not handle the large variations in light levels in different bits of the room and was either giving me good images of peripheral bits of the set with bright white people and random bright white objects, or giving me great images of the main focal points of the show with almost black everywhere else. This might be fixable if I spent enough time playing with the camera’s settings, but I didn’t go to TWiT to play with my camera.
It was also interesting to note that for this show, the remote side of the conversation can be heard aloud without the need for headphones.
Shortly after this I proceeded to Leo’s office/set where he was preparing for Windows Weekly #349 with Paul Thurrott and Mary Jo Foley. Leo’s set is awesome to be a visitor in, as the guest seating is extremely comfortable and the wireless headphones are also very comfortable (even for someone like me for whom many headphones cause the frame of my glasses to dig in to my head).
I’ve never noticed the monitor on the front of Leo’s desk before (it’s never really in shot, presumably so as to avoid a visual loop effect) which makes it easy as a visitor to see how what is happening in front of you is being packaged for broadcast.
Over this side of the room, behind the visitor chairs, is a monitor following the TWiT.TV IRC chat session, and the line and preview monitors of the Tricaster vision switcher, which is important as Leo switches his own shows when they are being produced from his office/set, whereas other shows are switched from a central control centre in the middle of the TWiT set. Two of the cameras are visible here (one for Leo’s solo shot, and the other for the “Leo plus Skype monitor” shot). On the other side of the glass is where Heather Hamman screens calls for Leo’s weekend radio show, and it is also the location of the set used by Tech News Today. Then on the far wall is a collection of hats which I was very happy to see, for a reason I’ll explain in just a moment.
On this side of the set you can see another camera (the one which faces the window so that Heather Hamman can be on-camera) and at the top right of the bookcase is a dropcam producing a live feed on the internet at most hours.
After Windows Weekly finished, I presented Leo with some gifts. One was an Australia hat (Leo’s collection of hats pleased me as I knew then that I was giving a hat to a connoisseur of hats). I also gave Leo some Tim Tams, which led to Leo demonstrating his favourite way of eating a Tim Tam…biting off the ends and then drinking his coffee or tea through the Tim Tam as if it was a straw. I thought the live stream had switched to the next set by this stage (I had stopped paying attention to the monitors) and only later, to my pleasant surprise, realised that Leo’s Tim Tam demonstration, our little chat, and a quick photo shoot had all been broadcast.
We chatted about a few things including the time I had Leo on Samuel’s Persiflage, the top I was wearing (seeing as Leo has had some fun with the stories about the NSA spying on everyone and everything, I wore a hoodie with the message “The NSA: the only part of government that actually listens”…I also wore my Linux.Conf.Au 2005 t-shirt as it has a staged IRC session on the back of it which I thought Leo would enjoy, but I was having so much fun that I forgot to show him), and how interesting and mind-bending it is to get used to driving on the other side of the road. The conversation was picked up to some degree at first by Leo’s studio microphone, and then later by an open mic in another part of the building. I left it all in the above video for posterity.
Now, for what is now a treasured item:
It was an honour and lots of fun to meet Leo and spend some time in the TWiT Brick House. As always, Leo went out of his way to make sure it was fun…while we had our photo taken he put on an Australian accent…I was too amused to remember exactly what he said but it certainly amused me.
One other mystery which was solved yesterday is the purpose of the symbol on Leo’s clock next to the top half of the final digit of the minutes. I’ve never watched in high definition so couldn’t identify it, but now I know it indicates the Pacific timezone, with the other US timezones not being illuminated.
I had a blast. A very big thank you to Leo and all of the TWiT.tv staff.
If you’re ever in the area, may I recommend Halli’s diner opposite the parking garage, about half a minute’s walk away from the TWiT Brick House. Absolutely fantastic lunch and lovely staff. I will probably pop into the diner again today as I would like to do some sightseeing around Petaluma. The old TWiT studio (TWiT Cottage) is a short distance from the current studio, and I would like to see it while remaining respectful of the privacy of the new occupants.
Now, that technical challenge I mentioned at the top.
How to download a particular portion of a long video from Justin.tv
One of the video streaming providers for TWiT, Justin.tv, temporarily keeps an archive of everything they stream (the archived video lasts a few days). While it is preferable to record the live video as it is a much simpler process, TWiT’s wiki also details how to download from Justin.tv’s archive.
The basic idea is that, using Firefox and an extension called Downloadhelper, you go to the Justin.tv video you want to watch and then tell Downloadhelper to download that file. The problem though is two-fold:
1) TWiT’s videos on Justin.tv run for many hours as they cover an entire day’s broadcasts and sometimes more (my clip, for example, was 52 hours into the video).
2) This method only downloads the first half hour of the video.
The solution, until recently, was to mark a section of the video as a highlight, which gave it its own unique URL which Downloadhelper could use to download just that portion of the video. Alas the highlighting function was removed from Justin.tv about a week ago, meaning that downloading the first half hour of the video seemed to be the only option…so how do you make Downloadhelper download a half hour starting at a time of your choosing rather than the start of the video?
A clue comes in the way Justin.tv handles a request to move playout from the existing window to another separate window. It adds a string to the end of the URL to tell the new window at what point in the video to start (although the Downloadhelper plugin is not easily accessible from such a window, so simply opening a popout window at your chosen starting point is not going to work for this purpose).
Instead, open the video as normal and figure out what point you want to start downloading from. Then, work out how many seconds that is (in my case it was a little short of 186,960 seconds) and then add the following string to the end of the URL in the address bar:
/popout?playback_time=SECONDS
where “SECONDS” is replaced by the number of seconds.
So, for example, in my case the address of the video went from
http://www.justin.tv/twit/b/502307186
to
http://www.justin.tv/twit/b/502307186/popout?playback_time=186960
which allowed me to make Downloadhelper download 30 minutes of video from a starting point of my choice, and I was then able to edit the video to my required duration.
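Incidentally, the seconds arithmetic is easy to script if you don’t feel like doing it by hand. Here is a quick PHP sketch (a helper of my own, nothing to do with Justin.tv itself) which builds the popout address from an hours/minutes/seconds offset:

<?php
// Build a Justin.tv popout URL which starts playback at a given offset.
function popout_url($video_url, $hours, $minutes, $seconds) {
	// playback_time is expressed in whole seconds from the start of the video.
	$offset = ($hours * 3600) + ($minutes * 60) + $seconds;
	return $video_url . "/popout?playback_time=" . $offset;
}

// 51 hours and 56 minutes in gives the 186,960 seconds from my example above.
echo popout_url("http://www.justin.tv/twit/b/502307186", 51, 56, 0);
?>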
The National Broadband Network regularly posts on its website updated figures regarding the number of households to which the NBN is available. Michael Still (who, unlike me, is a proponent of the NBN) has been tracking these numbers and has found something odd…the NBN seemingly no longer reaches 24 houses in the ACT that it reached two months ago.
NBN rollout in the ACT December 2013 – January 2014. Image credit Michael Still
On the face of it, the numbers don’t make sense for two reasons:
1) As the project is built, it should continue to reach an increasing number of houses. If more houses were being knocked down in Canberra than being built, the decline might make sense, but then you’d have to wonder whether it would be better to prioritise the rollout in places which aren’t being demolished.
2) That amazing drop last week which mostly undid itself this week. The numbers are dodgy; something is very wrong with the way they are being calculated.
This leads me to the inevitable question of “how many homes have actually been passed by the NBN?”. It’s possible that there was an overestimation of the number and they are now slowly auditing and correcting it, or it could be a more sinister and deliberate exaggeration of the numbers from before the federal election with a gradual correction of the numbers so as to not raise suspicions with a sudden drop.
This all leaves me wondering how much it is actually costing per house passed, and how much further over-budget this places the project than we already knew about. The NBN seems to be quickly devolving into another TransACT rollout…over budget, behind schedule, unlikely to ever reach all of the people it initially said it would, and likely to risk leaving people high and dry if it collapses under its own weight or doesn’t get bailed out somehow.
The inescapable conclusion is that this should have been left to the private sector to do in a cost-effective manner in response to consumer demand. The rollout wouldn’t have been as quick (not that you could call the NBN’s rollout quick) and the speeds might not have been as high as offered by NBN Co. initially, but at least it would have been done in a responsible and commercially sustainable manner which didn’t require tens of billions of taxpayer dollars (perhaps close to $100 billion) at a time when the federal government really can’t afford it.
The US Government (and in particular the Obama administration) has suffered a major setback in an attempt to regulate the Internet and the way Internet Service Providers do business.
A federal appeals court on Tuesday struck down the Obama administration’s net-neutrality rules.
The D.C. Circuit Court of Appeals ruled that the Federal Communications Commission overstepped its authority by prohibiting Internet providers from blocking or discriminating against traffic to lawful websites.
[..]
The decision is a blow to President Obama, who made net neutrality a campaign pledge in 2008, and erases one of the central accomplishments of former FCC Chairman Julius Genachowski, who pushed the “Open Internet” order.
The regulations were strongly backed by Internet companies like Google and Netflix, which fear that Internet providers will charge them more for the heavy use of their sites by customers.
On the winning side of the decision is Verizon, which filed the lawsuit, and other major telecom companies. They argued the rules created a huge regulatory burden while stifling innovation in the marketplace.
The reason this is a big deal is that Net Neutrality effectively prevented service providers (both retail and wholesale) from favouring certain websites and services over others, or from offering special deals to certain websites. It was promoted as providing equal access to everyone, and to an extent it may have achieved that aim if allowed to run its course, but could only have done so at the expense of a lot of competition.
A few examples for you. Suppose Google decide to build a new data centre in Alaska, and decide that building their own fibre network infrastructure there is unnecessary because there is already plenty of fibre up there being run by three competing service providers. Google request quotes from all three providers to join the new data centre to the existing networks, negotiate with them, and eventually come to an agreement with two of the providers: one will provide the bulk of the bandwidth at a cheaper rate, the other will provide a bit less bandwidth, and both will have the capability (for an extra fee) to provide all of the bandwidth if the other fails. The third carrier will not have direct connectivity to Google’s new data centre but will instead use one or both of the other carriers, and will probably already have an agreement with one or both of them for network access (and if they don’t, they can route via an interstate network…it will just be slower).
Under Net Neutrality laws, it would technically be illegal for one of those providers to offer Yahoo a better or worse deal than was offered to Google. It would also be illegal for one of those providers to sign an exclusivity deal with Google, whereby they would not offer their services to Yahoo and would receive an extra fee from Google as compensation, while Google would not seek out the services of other local providers. Such a deal would not prevent customers of the service provider from accessing Yahoo, or prevent customers of other service providers from accessing Google, but it would mean that customers of the provider with the exclusivity deal would have ever-so-slightly faster access to Google. Other local providers would send extra traffic to this provider when their customers access Google, as the local data centre would be faster to access than any of the interstate ones, and this would generate extra revenue for the provider with the exclusivity agreement.
Effectively Net Neutrality destroys competition, which means there is no reason for prices to come down. It would also mean that, rather than having lots of redundant and cheap connectivity paths from your computer to any website via your ISP, there would be far fewer and more expensive paths. Speeds would also not increase as much because, without competition, there is no reason for service providers to provision services ahead of demand. You would end up with a slower, more expensive, and less reliable Internet connection, and fewer choices of provider.
The other argument which was often used in favour of Net Neutrality was this one, also from the article linked above:
Tim Wu, a professor at Columbia Law School, said [FCC Chairman Tom] Wheeler “has to act.”
He pointed to the court’s decision to strike down the no-blocking rule, which he said will require FCC action. “It’s just a completely different world” if Internet providers are able to keep users from accessing certain websites and services, like Netflix, Skype and YouTube, Wu said.
Let’s go back to my original example above. To recap, there are three wholesale fibre providers. Providers A and B have direct access to Google’s new data centre. Provider C does not, but connects via both A and B. A, B and C all also run their own retail ISP and other retail ISPs in the area use one, two, or all of the fibre providers to connect their customers to the Internet-at-large. In turn, A, B, and C all have their own agreements with interstate and international network providers which overlap to some extent.
In Professor Wu’s understanding of a world without Net Neutrality, Provider C could decide to block all access from their network to Google because Google didn’t give them a contract to provide connectivity for the new data centre. While this is true, it ignores all of the market forces at work on the Internet. Yes, Provider C could do this, but why would they, when almost every one of their retail customers would leave and go elsewhere, and any ISPs who rely solely on them would quickly sign up with either A or B to regain connectivity to Google? If Provider C also hosts websites, well, those websites will be moved to another provider as soon as their owners realise that Google can’t see them any more.
Provider C could block Google, but they would go out of business very quickly. Instead, Provider C would be wise to either reach a better agreement with Provider A or Provider B for access to the local Google data centre, or with one of their interstate providers for better access to an interstate data centre, or even attract some other well-known websites to the local area under their own exclusivity arrangement. A lack of Net Neutrality laws promotes innovation among service providers and a desire to find a way to make their offering better than others.
Another problem with Net Neutrality is that it prevents niche providers from providing services to meet the specific needs of specific markets. For example, opt-in Internet filters for families who would like their ISP to block non-child friendly sites; these become illegal if implemented at the ISP level (although I’m sure the FCC would see fit to exempt such a thing). Also illegal would be an ISP specialising in pre-filtered Internet access for schools, child care centres, and summer camps…especially if it blocks Skype and the search engines and replaces them with their own VOIP and search facilities.
There is a little bit of wriggle room in the court’s ruling which allows the FCC to continue to regulate the behaviour of Internet Service Providers, and perhaps even reintroduce smaller portions of Net Neutrality, but for now the Internet is back to being a place of market freedom where competition makes things better for everyone.
I’m in the middle of planning a trip (OK, closer to the start than the middle) to the US at the moment, and it occurs to me that my profile, plus my writings from earlier today, could just mean that a computer somewhere in the FBI or the CIA wants an agent to dig a little deeper.
From the perspective of a computer which has been programmed to look out for key words and phrases, this extract from my blog post about the postal system earlier today might seem a tad suspicious.
I would [..] embed some [..] devices in items I post
Yes, the statement was about tracking devices, and one would hope that an FBI agent would see that and dismiss the computer’s concerns, but I still think the computer would be worried about talk of posting devices and embedding things. The blog post also mentioned ricin, a poisonous substance which was mailed to the US President and a senator today, and so chatter about it would probably be high on the priority list for intelligence-gathering computers.
If I was putting together an automated system which looks out for suspicious activity of the terrorist kind, and was mainly basing it on key words and phrases, I would probably set it up so that after identifying something as potentially suspicious, it would then take another look over it for other, less immediately obvious, suspicious phrases which might indicate a plot or some sort of code. Looking back over that blog post, I listed my postal address in an unusual format:
a post office box at the Dickson post office (1272
And talked about the inside of government buildings:
They finally found it somewhere in the PO
and
parcels which are [..] stored in the post office’s back rooms
and
wandering back out to the back rooms
A drug inference could even be drawn from
Nattie did give the letter a good sniff
or possibly an explosives inference if the computer works out that Nattie is a dog.
Further examination of my blog brings up photos of phone towers, electricity substations, and a map of a powerline which feeds a government building.
Yes, an FBI computer would have good reason to think I’m suspicious. And a profiler might be concerned when they learn that my trip to the US is so that I can visit people, most of whom are conservatives (Terrorism center at West Point warns against danger of American limited-government activists and ‘far right’ – The Blaze, January 18), many of whom are Christian, of which some are Catholic (Army training manual labeled Evangelicals and Catholics as religious extremists – Todd Starnes, Fox News Radio, April 5), and that I intend on visiting many places in rapid succession, including some important buildings in Washington D.C. I have also made my disdain for President Obama clear on many occasions (although I think I’ve made it clear, and if I haven’t then I will now, that I do not want him to come to any harm…instead I wanted him to be voted out, and now want him to finish his term and be remembered for being a President with policies which ultimately failed and sparked a need for a serious return to conservative governing principles).
Obviously, this doesn’t add up to anything suspicious, but I can see how, at a time when security services are on edge, the combination of my profile and writings could be enough to make a computer suspicious, and perhaps make security services want to take a closer look at me. Dare I say it, I won’t be surprised if I get pulled aside at Customs in the US next year for a little chat…in fact, I’ll be a little disappointed if it doesn’t happen.
All of this reminds me of a story from the start of this year about the FBI scanning emails for certain words and phrases which apparently are common in messages about fraudulent activity. The words and phrases were “gray area”, “coverup”, “nobody will find out”, “do not volunteer information”, “write‑off”, “failed investment”, “off the books”, “they owe it to me”, “not ethical”, and “illegal”.
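Purely to illustrate how simple such a scan would be, here is a minimal sketch in Python. The phrase list is the one from the story; the function name and the flagging logic are my own invention, not anything the FBI has described.

```python
# A naive phrase scanner of the sort described in the report.
# The phrase list comes from the news story; everything else
# here is hypothetical.
FBI_PHRASES = [
    "gray area", "coverup", "nobody will find out",
    "do not volunteer information", "write-off", "failed investment",
    "off the books", "they owe it to me", "not ethical", "illegal",
]

def flag_message(text: str) -> list[str]:
    """Return any suspicious phrases found in a message."""
    lowered = text.lower()
    return [phrase for phrase in FBI_PHRASES if phrase in lowered]

sample = "That write-off was a failed investment, but nobody will find out."
print(flag_message(sample))
# ['nobody will find out', 'write-off', 'failed investment']
```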
Glenn Beck had some fun with this on his radio show and jokingly suggested that they (Glenn or one of his co-hosts) should send an email containing all of those words just to confuse an FBI computer. Sure enough, co-host Pat Gray sent the message, and went to some lengths to make some of the phrases fit.
Dear Ahmed,
I’m sitting here gazing up at a cloudy grey area of the sky wondering how to cover up this blemish that I have on my nose. As a dermatologist, I thought you might have an idea of what I could use so nobody will find out that I’ve broken out again like a teenager. If you do not volunteer the information, I’ll probably have to see a specialist.
Up until yesterday, I’ve been using Clearasil on it but I realized that I can write off that failed investment of $4.99 because it didn’t work.
I wasn’t able to use the cream you prescribed for me last week because I put the jar on top of some books at my parents’ house and wouldn’t you know it, I bumped into the table that those books were sitting on and a jar fell off the books and onto the floor and broke.
My parents said that since I loaned them $20 last month, they would be happy to pay for a new prescription because they owe it to me. But I told them I wasn’t sure if it was not ethical to provide the medication again so soon.
Anyway, if you can call me on that, please call it in to the Walgreens at Fourth and Main, as I have found that to get to the one on 29th and Main, you have to make an illegal U‑turn at the light, and I don’t want to do that.
Thanks again. Whatever you can do, Dr. Ahmed.
I found it much more amusing when I heard it go to air. The video of it is embedded in the page of the above link, but it’s not working for me. Thankfully I have my own recording of it.
[audio:https://samuelgordonstewart.com/wp-content/GlennBeckFBILetter.mp3] Download MP3
(Audio credit: Glenn Beck, Mercury Radio Arts, Premiere Radio Network)
It seems that in areas of Central West NSW where 3G (not 4G) and ADSL1 (not ADSL2 or ADSL2+) services are all that is currently on offer, the National Broadband Network rollout will not occur until 2016.
Wasn’t the whole premise of the NBN that it would ensure that people in regional areas would receive internet services on par with their counterparts in metropolitan areas? Wasn’t that the main reason behind the idea of spending $43 billion? NBN Co’s Statement of Intent, which was tabled in Parliament on the 9th of October last year (more than three years after the project was started…one does have to wonder what took so long), certainly seems to think so:
Introduction
In the Statement of Expectations released on 20 December 2010 the Government expressed three central objectives for the National Broadband Network (NBN):
– To deliver significant improvement in broadband service quality to all Australians;
– To address the lack of high-speed broadband in Australia, particularly outside of metropolitan areas; and
– To reshape the telecommunications sector.
The NBN will enable high-speed broadband to be delivered to all Australian households, businesses and enterprises, through a combination of Fibre-To-The-Premise (FTTP), Fixed Wireless and Satellite technologies
(start of page 4, NBN Co. Statement of Corporate Intent 2012-2015).
$43 billion with a primary goal of getting regional areas up to scratch, and it’s going to take until after the “Statement of Corporate Intent” expires to get Central West NSW completed, and that’s if it doesn’t get delayed even further!
The demand is there. If the NBN wasn’t preventing the private sector from building private infrastructure, this would be done by now, or at least be almost completed. This whole government-run scheme is an expensive shambles.
I’ve noticed some odd things with Facebook before where it seems to have known things about me that it should not have been able to know, mainly about people that I have communicated with in the past. On more than one occasion it has suggested a potential friend for me…someone with whom I corresponded via email once, many years ago, and with whom I have no mutual friends on Facebook. About the only way Facebook could know that I ever communicated with this person is if it had access to my Gmail account, or the other person’s email account. I certainly didn’t grant access…maybe the other person did. Either way, it was odd.
Today, something which would have required a little bit more research.
Of late, Facebook has become more forthright with its suggestions of pages in which it thinks I might have an interest. For the most part this has been benign, relating to something which I recently mentioned on Facebook…but that one there about Bundanoon is odd, very odd.
To the best of my knowledge I have never mentioned Bundanoon on Facebook. I have mentioned the nearby town of Moss Vale, but that was many months ago. I have mentioned Bundanoon on this blog before, but that was years ago in relation to a nutty move in that town to ban bottled water. Until today though, Facebook has never suggested that I “like” Bundanoon’s page, so what’s changed?
Well, I think that’s simple and a tad scary. Last night I wrote a blog post about a dream I had which, among many other things, involved flying to Bundanoon. This blog post, once published, was automatically linked to in a notice about a new blog post on my Twitter account, and that tweet was automatically cross-posted to my Facebook account.
It seems to me that the only way Facebook could have determined that Bundanoon might be something in which I am interested would be that, upon seeing the link to my blog post, a Facebook robot has scoured the blog post for terms which relate to Facebook pages. If this is the case, how many of my other links has Facebook scoured? And how much of a profile has it built up about me? Equally importantly, how accurate is that profile of me? Because it has to be noted that I do not only link to things with which I am in complete agreement.
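If that theory is right, the mechanics wouldn’t need to be sophisticated. Here is a purely speculative sketch (I obviously have no knowledge of Facebook’s actual systems; the page list, the crude tag stripping, and the function name are all invented for illustration) of what such a robot might do with a shared link:

```python
import re
import urllib.request

# A hypothetical index of known page names; the real thing would
# obviously be vastly larger and smarter than a flat set of strings.
KNOWN_PAGES = {"Bundanoon", "Moss Vale", "Canberra"}

def scour_link(url: str) -> set[str]:
    """Fetch a linked page and return any known page names it mentions."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
    return {page for page in KNOWN_PAGES if page in text}

# Scouring the cross-posted link to my blog post about the dream
# might then yield {"Bundanoon"}, and out pops a page suggestion.
```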
Beyond Facebook, it makes me wonder who else is building detailed profiles of me, and why they are doing so. I suspect quite strongly that I probably would not like the answer…and many others would feel the same way about the profiles being built about them.
Maybe I’m over-thinking the situation…but I just can’t help but think that this is exactly the sort of thing which George Orwell was warning us all about.
Based on some feedback, I have decided that, in order to make these Sunday Bits posts a bit easier to navigate, they will now contain a list of contents and headers at the start of each section. I hope this makes it easier for you to read the bits that interest you and skip the ones that don’t, rather than simply skipping the entire post due to a small section which doesn’t interest you.
In this edition:
* A prediction for tomorrow’s Labor leadership showdown
* The first radio ratings of 2012
* 2UE dumps their only weekday ratings winner of 2012
* Why telecommunication monopolies are bad
* A review (well, almost) of Tinker Tailor Soldier Spy
* Mount Majura in the fog
A prediction for tomorrow’s Labor leadership showdown
Tomorrow morning at about 11am we will know, one way or another, who will lead the Australian Labor Party for at least the next few days, and who will probably be sworn in as Prime Minister when Governor-General Quentin Bryce returns to the country on Thursday or Friday.
My prediction is that Julia Gillard will win, but not because she is a better leader. I expect her to win on the basis that the agreement with the independents and the Greens was made with her, and not with the Labor Party. Julia Gillard was very clever when she made sure that the agreement was made with herself and not the Party as it helps to secure her position as leader, a position which she would have known would, at some stage, come under threat due to the tenuous nature of minority government.
Electing anyone other than Julia Gillard as Labor leader potentially puts the agreement with the cross-benches under threat, and could potentially lead to a new general election. At this time, based on current opinion polling, Labor do not want to risk an election which is likely to see them annihilated.
For the record, I doubt that the Greens will ever back out of their effective coalition with Labor, as they really need Labor more than Labor need them, but the independents are another story, as they might see disassociating themselves from the current disorganised mess as a way to secure their seats.
On the off chance that Kevin Rudd or some other as-yet unnamed contender takes over the Labor leadership, they have the advantage of the Governor-General being out of the country until at least Thursday, giving them time to negotiate to keep the independents and the Greens on-side…because it would be terribly embarrassing and destructive to themselves and the Labor Party to take over as Prime Minister and then immediately have an election called due to a no-confidence motion succeeding in the Parliament.
Also, while it is true that a state governor could swear in a new Prime Minister in the absence of the Governor-General, I doubt that it will happen as a new Labor leader won’t mind waiting a few days to shore up the numbers.
***
The first radio ratings of 2012
On the whole, it wasn’t a great survey for commercial talk radio. In Sydney, while 2GB remains on top of the ratings by four whole percentage points, they did lose ground, losing 0.8 percentage points. 2UE went up by 0.3, mostly on the back of weekend ratings, but lost ground on most weekday shifts and remain a fair way down the ratings pile.
The biggest winner was Triple J which went up 2.7% to 7.4%. The biggest loser was 2DAY FM which went down 1.6% to 8.3%.
Last place belongs to ABC NewsRadio on 2.2%.
In Melbourne, 3AW remains on top but, like 2GB, took a bit of a hit. MTR lost ground in every timeslot, although it is worth noting that some of the survey period took place while MTR were taking extra programming from 2GB, so the next survey will give a better indication of how the local news cutbacks have affected MTR. Interestingly, for the first time in a very long time (many years, I believe), 3AW’s Neil Mitchell did not win his timeslot. He lost 3.5 percentage points in the morning timeslot, dropping from 15.7% to 12.2%, meaning that the local ABC station’s Jon Faine is now winning mornings on 13.7%.
The leaderboard in Melbourne:
3AW: 12.8%
ABC 774: 12.3%
Fox FM: 9.6%
Nova: 8.5%
Gold FM: 7.4%
The biggest winner was Nova which went up 1.5% to 8.5%. The biggest loser was shared between Fox FM and Melbourne’s 91.5FM which both went down 1.3%, Fox to 9.6% and 91.5FM to 2.9%.
Last place went to MTR1377 and ABC NewsRadio, both on 1.4%.
In Brisbane, 4BC bucked the trend for commercial talk stations, going up by 0.9 percentage points.
The biggest winner was 97.3 which turned a narrow lead into a massive one by gaining 2.4% to sit on 14.1%. The biggest loser was Triple M which lost 1.7% to drop from 4th to 5th, drop out of double digits, and sit on 9.4%.
In last place, yet again, ABC NewsRadio on 1.5%.
In Adelaide, FiveAA lost ground but remained in second place. Of particular concern for FiveAA has to be their afternoon drive shift which lost a whopping 6.3% to drop from 1st place to 5th place.
The biggest winner was Triple J which gained 2.7% to sit on 8.3%. The biggest loser was Mix 102.3 which lost 2.3% to sit on 13.6%, retaining first place due to FiveAA also losing ground.
In Perth, 6PR lost ground as well, losing 1.2% overall and losing ground in every timeslot. Howard Sattler suffered the biggest loss, losing 3.4%.
The biggest winner was 96FM which went up by 2.4% to 11.8%. The biggest loser was 6PR which went down by 1.2% to 8.1%.
Last place went to ABC NewsRadio on 1.2%.
The one consistent thing across all of the surveyed cities is that NewsRadio is in last place. How thankful the NewsRadio staff must be that it is not a commercial operation and doesn’t need to make money, because if it was, heads would roll and changes would be made. For the rest of us, who pay for NewsRadio through our taxes, what a shame it is that we are paying for a service that almost nobody listens to, when in other countries all-news formats have been made commercially viable. Even without the advertising, NewsRadio could reach a much larger audience simply by making some changes that have been proven to work elsewhere, but as long as the tax dollars keep rolling in, there is no incentive to do so, and thus, they won’t.
***
2UE dumps their only weekday ratings winner of 2012
Back to Sydney we go, and 2UE’s perennial game of shuffles is on again. Sports Today, which was dumped at the beginning of last year, is back, albeit with two extra hosts. It reclaims its old 6pm-8pm timeslot, bumping Murray Olds and Murray Wilton, who have shared the 6pm-9pm timeslot over the last year to mixed success.
The Two Murrays, combined with Mike Jeffreys until midnight (as the publicly available data goes from 7pm-midnight), lost 2%, the station’s largest loss. It seems quite bizarre, then, that The Two Murrays are being placed into the weekday afternoon slot, formerly hosted by Michael Smith and recently hosted by Stuart Bocking since Smith’s axing, when Stuart Bocking delivered the station’s largest weekday gain of 0.6%. Even stranger, Stuart has been dropped from the schedule completely. He remains on the payroll, though, and is expected to be retained as a fill-in host, but I think it’s safe to say that Stuart deserves better given his recent performance.
Sports Today starts tomorrow. It’s likely that Mike Jeffreys’ night program will start at the earlier time of 8pm. The Two Murrays start in their new timeslot in a week, so Stuart Bocking probably still has the coming week in the timeslot.
Meanwhile it is rumoured that David Oldfield might also succumb to the game of shuffles, to be replaced by a duo of Prue MacSween and Tracey Spicer. David Oldfield has failed to make a dent on rival Ray Hadley’s ratings, and I highly doubt that anyone can make significant inroads there, so I understand the move to an extent.
I don’t have access to demographic breakdowns of Ray Hadley’s ratings, so this is all somewhat informed conjecture based on the callers to Ray’s show, but I have always thought that Ray’s ratings primarily come from a male audience and an older female audience. 2UE have clearly attempted to attract a younger audience, and I suspect that they have a shot at attracting a decent-sized 30 to 60-year-old female audience with a duo of Prue MacSween and Tracey Spicer. This is a demographic which, to my ear at least, is dominated by FM music stations and possibly ABC 702, and as such lacks any strong commercial talk presence. Talk radio generally has a more engaged audience due to the nature of the programming, and thus if 2UE can successfully build a reasonably sized female audience in that timeslot, then they could attract a new set of advertisers. Alas, I fail to see how The Two Murrays could retain that type of audience, and think Stuart Bocking would be much better at retaining a female audience, as women seem to absolutely love him.
***
Why telecommunication monopolies are bad
On Thursday, Telstra suffered a rather nasty outage on their network, apparently caused by an issue between themselves and Dodo, which took down their entire Australian data network for the better part of an hour. This caused issues beyond Telstra, as many other internet service providers use Telstra’s network for various bits of their connections; however, as other providers also hook into networks other than Telstra’s, many were able to route around Telstra and minimise the disruption for their own customers.
Some providers, my ISP Internode included, had almost no disruption as Telstra are not their primary network provider.
It’s a bad thing when a large player has an issue, but imagine what would happen in the case of a monopoly. The monopoly goes down, and this takes everyone down.
Now, aren’t you glad that in the not-too-distant future, everyone is going to be relying on the infrastructure of the National Broadband Network?
Ahh yes, the government-owned NBN Monopoly…is it any wonder that some worry about the possibility of the government having a “kill switch” for the internet once the NBN is in place? Even without a kill switch, the NBN will make us all reliant on a single network, which is precisely what the distributed nature of the internet was designed to prevent. It’s certainly not what I call “progress”.
***
A review (well, almost) of Tinker Tailor Soldier Spy
On Friday I went along to Dendy in Civic to see Tinker Tailor Soldier Spy, a movie which is set during the Cold War years and involves a sacked British spy being asked to investigate the possibility that there is a Russian spy embedded at or near the top of MI6.
The movie is quite dense, and requires a lot of attention. Turn away or lose concentration for a minute, and you will miss vital information. This is a bit of a problem as the movie also makes you think, and there’s not a lot of time between informative bits of the movie in which to think.
It’s a very enjoyable movie, partially because it doesn’t waste time explaining things which are patently obvious, and is therefore aimed at an audience which enjoys working things out for themselves.
Without giving away any detail of the ending, I will say that it leaves you somewhat satisfied, but still wanting more, and also leaves you thinking and putting together some of the dots that the movie doesn’t fully explain.
I enjoyed it, but want to see it again on DVD (yes, I am one of those people who has not upgraded to Blu-ray yet) so that I can pause and rewind the movie occasionally to check things.
The movie is rated MA, but I can’t work out why. “Strong Violence” is the reason according to the consumer advice, but the violence in the movie is extremely intermittent and no worse than a shooting or two, and a beating. Even with the sex scenes, I see no good reason for this to be rated higher than M.
Four and a half stars from me. I would have given it five stars if the movie had taken just a bit more time to explain the ending. Then again, maybe it did, and I missed those plot points while I was thinking.
***
Mount Majura in the fog
Finally, a photo to leave you with on a mostly cloudy day in Canberra. It’s not from today, but was a nice sight earlier in the week anyway. Mount Majura, with the airport radar obscured by fog.
The bit which I find interesting is that police also allege that this man was responsible for the attack which brought down Distribute.IT, a wholesale service provider of website hosting, domain names and the like. Distribute.IT was a fairly large player in the Australian market, providing wholesale services to many of the other players in the market.
The attack on Distribute.IT resulted in the total loss of somewhere in the order of 4,000 websites, and chaos for the owners of many thousands of domain names, not to mention the retail service providers who had to deal with the fallout from it all. For .au domains, the chaos was slightly more contained, as the core systems which allow for the domains to be transferred to other providers (not run by Distribute.IT) continued to work. For non-.au domains, such actions were not possible, and thousands upon thousands of domains were left in limbo: still operating to the extent of allowing traffic to be directed to appropriate servers, but unable to be managed in any way by their owners, and unable to be renewed if they were due to expire, which some did.
Eventually another provider, NetRegistry, bought Distribute.IT’s assets without any of their liabilities and set about restoring the horribly compromised Distribute.IT systems to some form of functionality before moving customers across to their own systems. Debate rages about whether NetRegistry’s move was the best possible outcome: moves were afoot by authoritative bodies within the industry to dissolve Distribute.IT’s domain registrar accreditation, which may have resulted in people being able to move their domains to other providers more easily, but could also have been very messy. I don’t propose to try and decide which option would have been better. What I can say is that the full functionality of the management side of the affected domains has still not been restored, and that this hacking has resulted in many thousands of hours of lost productivity throughout the Australian internet services industry, and in other industries which rely on it, such as businesses with online stores.
I think that this is a much bigger and more interesting story than an intrusion into the systems of a company which happens to have an agreement with NBN Co., and am somewhat disappointed that it won’t get anywhere near the amount of coverage. I suppose, though, when it is all added together and you take into account the fact that the man who police allege is responsible for it all has no formal qualifications in IT whatsoever, it does go to show what many people in the IT industry have been saying for a very long time: experience trumps qualifications every time.
It’s a question I find myself asking, given how the Civic Exchange is missing from the list of exchanges to be affected by Internode’s backhaul provider’s imminent network maintenance.
During the times above customers will experience a lack of data flow and/or authentication for upto 60mins whilst our backhaul provider conducts software upgrades on their network.
This makes me wonder if the Civic Exchange (which I am connected to, and have yet to publish photos of), is the only exchange, or one of a very limited number of exchanges, in the ACT to have a direct connection to the rest of the Internode network, and if the rest of the exchanges connect via the “backhaul provider” and the Civic exchange.
This kind of thing, for better or for worse, fascinates me…in much the same way as it would fascinate me to see where my phone line goes within the Civic Exchange.
It’s not the first time that this sort of test has been done, and it probably won’t be the last either, but it’s time to knock the stupid theory on the head once and for all.
ABC TV’s Hungry Beast program have found that a carrier pigeon is able to transport a 700MB file between two rural towns, more quickly than a car or the Internet. Apparently this makes pigeons faster than the Internet, supposedly dispelling Kevin Rudd’s theory that we would be worse off under a Liberal government which he seems to think would replace the Internet with carrier pigeons.
In terms of raw throughput, they may be right. The pigeon took one hour and five minutes, which is an average speed of 179.5 kilobytes per second. The car took a bit longer…and here’s where the test falls down on throughput…the Internet connection dropped out a number of times and didn’t finish the download, which says more about the phone line used for the Internet connection than anything else.
As it happens, the test is very wrong on throughput, at least in areas with ADSL 2+. On my home connection, I can regularly get downloads of a bit over 2 megabytes per second (2,000 kilobytes per second), which is more than ten times the speed of a pigeon.
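The arithmetic behind those figures is simple enough. A quick sketch, using the 700MB and 65 minutes from the Hungry Beast test and the 2 megabytes per second from my own connection:

```python
# Pigeon throughput from the Hungry Beast test
file_size_kb = 700 * 1000        # the 700MB file, in kilobytes
flight_secs = 65 * 60            # one hour and five minutes
pigeon_kbps = file_size_kb / flight_secs
print(f"Pigeon: {pigeon_kbps:.1f} KB/s")   # ~179.5 KB/s

# My ADSL2+ connection, by comparison
adsl_kbps = 2000                 # a bit over 2 megabytes per second
print(f"ADSL2+: {adsl_kbps / pigeon_kbps:.1f}x the pigeon")  # ~11.1x
```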
That said, the pigeon test can be debunked even further, as the test only takes into account raw throughput of large files, and completely ignores the way that the Internet actually works.
Take what happens when you visit the home page of this blog, for example. Firstly, your web browser sends a request to the server for the page, then the server sends the raw HTML code of the page back to your browser. Your browser reads this and generates a new request for the CSS stylesheets, as well as every single unique image on the page (16 at the time of writing), and all of the embedded content such as YouTube videos, of which there are a few; the servers responsible for these images and embedded content then send the requested data back to your browser. If you then go and watch one of the YouTube videos, the browser has to request that, and YouTube’s servers send the data back to your browser.
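If you want to see just how many follow-up requests a single page generates, a short script can count them. This is a rough sketch using only Python’s standard library; it only looks at images, stylesheets, scripts, and iframes in the raw HTML, so it undercounts compared with a real browser:

```python
from html.parser import HTMLParser
import urllib.request

class ResourceCounter(HTMLParser):
    """Collect the URLs a browser would request after the initial HTML."""
    def __init__(self):
        super().__init__()
        self.resources = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.resources.add(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.resources.add(attrs["href"])
        elif tag in ("script", "iframe") and "src" in attrs:
            self.resources.add(attrs["src"])

html = urllib.request.urlopen("https://samuelgordonstewart.com/").read()
counter = ResourceCounter()
counter.feed(html.decode("utf-8", errors="replace"))
print(f"1 HTML request, plus {len(counter.resources)} follow-up requests")
```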
On the Internet, this doesn’t take very long. Requests go back and forth in moments, and it’s the larger bits of data (images, videos etc) which take time to download due to bandwidth restrictions.
You try doing that with a set of carrier pigeons. This site is hosted on a server in Melbourne, and I’m in Canberra, so your calculations will vary depending on your location, but let’s assume that the news report is accurate and that pigeons fly at about 130km/h (which sounds dubious to me, but we’ll run with it). Melbourne is about 650km away if you go in a straight line, so it would take a pigeon five hours to travel that distance.
Imagine that. You request my website at 7am on Monday, the pigeon arrives in Melbourne at midday, and returns with the HTML code of the website at 5pm. Your browser then requests the CSS stylesheet and, say, nine images, because you only have ten pigeons at your disposal…they are a finite resource after all. The pigeons arrive in Melbourne at 10pm, and get the data back to you at 3am Tuesday. You now have the stylesheet, so the formatting looks about right, and you have some of the images, although some of the formatting images are linked from the stylesheet so the site still looks a bit odd in many places. Your browser requests the rest of the images and the embedded YouTube players, the pigeons get to Melbourne at 8am, and bring the data back to you at 1pm.
So, the total time required to load just the front page of this website via carrier pigeon is 30 hours. This would not get any faster if you had more pigeons, either, as you wouldn’t have known about the formatting images until you got the stylesheet back.
Thanks to browser caching of formatting images and stylesheets, you might be able to reduce the loading time of subsequent pages on this website to twenty hours, but that doesn’t really make the site any more useful to you.
And just think…if it takes that long to load a domestic webpage, how long would it take to load a website from overseas? It’s about 15,000 kilometres to the US, which is roughly 23 times the distance from Canberra to Melbourne, so if we multiply the domestic loading time of 30 hours by 23…ye gods! It would take 690 hours (28 days and 18 hours) to load the front page of this website. Yes, that’s right, nearly a month to load one page.
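Laying the whole calculation out, under the same assumptions as above (130km/h pigeons, 650km to Melbourne in a straight line, three round trips per page load, and the US roughly 23 times as far away):

```python
# Pigeon latency, using the figures above
speed_kmh = 130                                  # the claimed pigeon speed
distance_km = 650                                # Canberra to Melbourne
round_trip_hours = 2 * distance_km / speed_kmh   # 10 hours

# Three round trips: HTML, then stylesheet plus first images, then the rest
domestic_hours = 3 * round_trip_hours
print(f"Domestic page load: {domestic_hours:.0f} hours")   # 30 hours

# The US is roughly 23 times the distance
us_hours = domestic_hours * 23
days, hours = divmod(us_hours, 24)
print(f"US page load: {us_hours:.0f} hours ({days:.0f} days, {hours:.0f} hours)")
# 690 hours (28 days, 18 hours)
```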
And none of this even takes into account the extra hours required for DNS lookups before you can even send a request to the appropriate server.
All I can say is thank God the ABC and their pigeons don’t run the Internet!