
Tech Update

Computer Interface

Most of you may have read the various news reports about
results of the financial stress review recently concluded in the European Union.
The primary aim of the review was to assess the strength (or weakness) of banks
to meet the challenges prevailing currently. Fortunately, the results brought a
fair amount of cheer for all and sundry. All but 5 of the banks passed
(quite the opposite of the recently announced CA final results, in which less than 5%
passed). But while the various members of the finance ecosystem were doing an
assessment exercise, members of the mobile ecosystem were doing some
housekeeping themselves. The media was filled with reports of certain emerging
trends, setbacks, projects / ventures being shelved.


Emerging trends :

The word ‘trend’, in general, means the popular taste at a
given time, a general tendency to change, a general line of orientation or a
general direction in which something tends to move
and then again it
also means to turn sharply, change direction abruptly. It’s funny
when you stop to think about it, how the same word conveys different messages,
in this case more or less opposite meanings. For some, a trend is the most
obvious thing, which makes choice easy; then there are others who would say
they never saw it coming. Not convinced ? Look at the state of the US financial
system and the arguments on the current scenario . . . . many say we went hoarse
shouting bloody murder, and the Fed says it never saw it coming.

Coming back, here are some fairly interesting developments
(trends) that may interest you:

Broadband service a legal right in Finland :

Apparently, Finland is the first country in the world to make
access to broadband services a legal right for its 53 lakh citizens. Under the
new law, which came into effect earlier this month, telecommunications companies
will be obliged to provide all citizens with broadband lines that can run at a
minimum of 1 Mbps (megabits per second). While making this announcement, the
Finnish Ministry said “Internet was part of everyday life for Finnish people and
it was the government’s priority to provide high speed Internet access to all.
Internet services are no longer just for entertainment, Finland has worked hard
to develop an information society and a couple of years ago we realised not
everyone had access”. It is believed up to 96% of the Finnish population are
already online and that only about 4,000 homes still need connecting to comply
with the law. The government has also promised to connect everyone to a 100 Mbps
connection by 2015.

You may recall, the Indian Government had also made certain
promises (among others) when it unveiled India’s broadband policy in 2004.
Instead, all we’ve got so far is more dug-up roads and the ever-increasing
frequency (not to mention duration) of power outages. Suffice it to say, we have
a long way to go for now.

Kindle e-book sales outpace hardcover sales on Amazon :

Earlier this month Amazon.com, one of the US’ largest
booksellers, announced that for the past three months, sales of books for its
e-reader, the Kindle, outnumbered sales of hardcover books. In that time, Amazon
is said to have sold 143 Kindle books for every 100 hardcover books (including
hardcovers for which there is no Kindle edition). Amazon.com added that in the
past four weeks sales rose to 180 digital books for every 100 hardcover copies.
Apparently the pace is quickening. It may interest you that Amazon has 630,000
Kindle books, which is a small fraction of the millions of books sold on the
site.

Meanwhile, Penguin launched the first electronic book with a
video tie-in. Penguin Group and Liberty Media’s Starz Media began selling the
first version — for Apple’s iPad — of a novel with accompanying video from a TV
mini-series based on the same tome. News reports suggest that the deal may serve
as a model for other cross-media partnerships. Priced at $ 12.99, above the
$ 9.99 industry norm for e-books (read Kindle books), Penguin’s iPad
version of Ken Follett’s 12th century England epic ‘The Pillars of the Earth’
will let users read the novel and watch scenes from the mini-series.

While book lovers mourn the demise of hardcover books, with
their heft and their musty smell, publishers may need a reality check. Here’s
why. The CEO of a media company which advises book publishers on digital change
said that “This was a day that was going to come, a day that had to come”. He
even predicted that within a decade, fewer than 25% of all books sold would be
print versions. Another CEO commented that “the shift at Amazon is
astonishing
when you consider that we’ve been selling hardcover books for 15
years, and Kindle books for 33 months”. (there you have it, the obvious
and the oblivious — and they coexist in the same business).

India unveils prototype of $ 35 tablet computer :

It looks like an iPad, only it’s 1/14th the cost : India has
unveiled the prototype of a $ 35 basic touchscreen tablet aimed at students,
which it hopes to bring into production by 2011. “This is our answer to MIT’s
$ 100 computer,” Human Resource Development Minister Kapil Sibal told the media
when he unveiled the device.

In 2005, Nicholas Negroponte — co-founder of the
Massachusetts Institute of Technology’s Media Lab — unveiled a prototype of a
$ 100 laptop for children in the developing world. India rejected that as too
expensive and embarked on a multiyear effort to develop a cheaper option of its
own. Negroponte’s laptop ended up costing about $ 200, but in May his
non-profit association, One Laptop Per Child, said it plans to launch a basic
tablet computer for $ 99.

News reports indicate that the tablet can be used for
functions like Word processing, web-browsing and video-conferencing. The tablet
doesn’t have a hard disk, but instead uses a memory card, much like a mobile
phone. The tablet design cuts hardware costs, and the use of open-source
software also adds to savings. It has a solar power option too, though that
add-on costs extra. Without discounting the cost, it seems like a real blessing
when one considers the ever-increasing frequency, not to mention the duration,
of power blackouts in India. A Ministry spokesperson said falling hardware
costs and intelligent design make the price tag plausible. Apparently, several
global manufacturers, including at least one from Taiwan, have shown interest in
making the low-cost device, but no manufacturing or distribution deals have been
finalised.

India plans to subsidise the cost of the tablet for its students, bringing the purchase price down to around $ 20. Kapil Sibal turned to students and professors at India’s elite technical universities to develop the $ 35 tablet after receiving a ‘lukewarm’ response from private sector players. The stated goal is to get the cost down to $ 10 eventually.

If the Government can find a manufacturer, the Linux operating system-based computer would be the latest in a string of “world’s cheapest” innovations to hit the market out of India, which is home to the 100,000 rupee ($ 2,127) compact Nano car, the 749 rupee ($ 16) water purifier and the $ 2,000 open-heart surgery. But given the past, one doesn’t know whether this project will die a quick death within this year, or a painful government-funded one over the next two.

Tax returns on Twitter:

Before you jump to any conclusions, it ain’t happening in India yet. Savvy politicians are no strangers to Twitter and Facebook, using them for their own political ends (Obama, Shashi Tharoor, Lalit Modi, to name a few of the celebrated users).

Incidentally, Filipinos are among the most prolific users of social networking and text messaging in Asia. Earlier this month, the Philippines’ new government turned to social networking, using it to meet some serious social and economic ends for the country. When most nations are fretting about their fiscal deficits, Manila thought of an innovative way out to bridge the gap: enlisting Twitter and Facebook to boost tax collections. Honest citizens will be allowed to complain about tax evasion and corruption, by posting an update on Facebook or Twitter, when they smell a tax cheat.

No prizes for guessing if this would work in India. After all, India is not just growing to be the land of enthusiastic tweeters, but also the very land of tax evaders and Swiss bank account holders. The question that begs to be answered is, are Indians morally outraged enough about cheating the government that they start telling on their neighbours or will they continue to remain mute spectators? (Jaago re!!!…….)

(The concluding part of this write-up will be printed in the next issue of the Journal)

Social Networking: boon or bane

Computer Interface

For the uninitiated, initially, social networks were networks
or meeting places set up by people who wanted to ‘keep in touch’ or team up
after starting their career. Facebook as we know it today, was akin to a
school/college yearbook — a photo album, the only difference being that it was
in the form of an electronic billboard, where one could look up old colleagues
and exchange information. With added impetus from technological advancement,
developments in networking technology and mobile phones, over time this
electronic billboard evolved to social networks as we know them today.
Presently, social networks, among other things, are :

  • Forums for sharing materials;

  • Virtual market places — to meet like-minded people, share videos, pictures, thoughts, etc.


Social networks are unique in the sense that, while they
serve one’s personal needs, they are equally useful in meeting one’s business or
professional needs. The following examples illustrate this :



  • Social networks allow you to keep in touch with family members staying in a different city (yes, I am aware that we have the old & faithful postcard, the telegram, and yes, telephone rentals have dropped drastically so we can always call our friends or send an sms or chat with them on the net, but imagine reaching out to all your friends and relatives at one go, with added interactivity);

  • Social networks give you the impression of being in a space of your own. They allow you to mingle with like-minded communities (discussing ideas or experiences on your latest trek, purchase of a new camera, car, etc.);

  • They enhance social and political communications (apparently social networking contributed significantly to President Obama’s campaign);

  • B-Schools use them to send out information to their students (IIM Calcutta took its first step in Dec. 2008; from breaking news and blog links to CAT and campus-placement updates, the tweets on ‘IIMC’ reflect a broader use of Twitter than most celebrity users seem able to comprehend).




Having understood this background, let’s get on with the basics.

There are various types of online social media — from social
networks of friends and professionals, to microblogging services, to video
sharing sites. To name a few:

Online friends networks:

Facebook:

The world’s largest social network, with hundreds of millions of
users, began when a small group of Harvard students, led by Mark Zuckerberg,
decided to keep in touch with each other. It soon opened out to other US
campuses and eventually in 2006, to everyone.

Orkut:

At one time India’s most popular social network, this
Google-owned service was set up by former Google engineer Orkut Büyükkökten in
his spare time. Once a hit with users, it is far behind in the global popularity
stakes. Orkut has faced some issues because of its previously open nature. After
legal problems in 2007, Orkut substantially cleaned up the network, but by then,
the damage was done — ‘high-end’ users had begun switching over to Facebook.
(Incidentally, have you tried Google Buzz ?)

MySpace:

Quite popular with musicians and actors, who use the site to
host music and movie clips, this site was picked up by Rupert Murdoch’s NewsCorp
a few years ago and its immense popularity made Google give it a lucrative
advertising deal.

Video sharing:

YouTube:

YouTube has started a video revolution — it’s as simple as
that. The service — which allows anyone to upload video clips on to the net —
from your baby’s first steps to a music video that you recently shot — commands
a big chunk of Internet traffic today. According to estimates, every minute of
the day, over 10 minutes of content is being uploaded on to the service. (In
fact, you can watch IPL3 matches on this network).


Other video sharing services:

Hulu is a video service promoted by US TV network NBC and has
high-quality online broadcasts of their shows. Apparently, users from India
cannot access Hulu.

Other sites include Vimeo and DailyMotion.

Online professional networks:

LinkedIn:


According to last year’s statistics (current number would be
higher), there are 41 million users on LinkedIn, of which two million are from
India (the second-largest user base after the US). Virtually every large company
and executive has a LinkedIn account, and there are examples galore of how India
Inc. is using LinkedIn to find talent and do more. The site is possibly unique
among social networks, in the sense that it claims to be profitable (i.e.,
LinkedIn is showing profits) through advertising and ‘premium’ membership.

Blogging:

Most blogging sites are also ‘social media’ by definition —
they allow anyone and everyone to create a blog. Also, if the blogger allows it,
anyone with net access can post a comment on the blog, which can be moderated.
Blogging is the oldest form of ‘read-write’ online social media, but has now
reached a stable phase. The most popular free blogging services online where
anyone can set up a blog are :

  • Blogger/Blogspot


  • WordPress


  • LiveJournal

Microblogging :

Twitter :

This is a blazingly fast-growing service : one estimate put Twitter’s growth at a staggering 1,382% a month, with an estimated 100 million users. (A Harvard study estimated that 10% of these users, by and large, created 90% of the content.) Twitter essentially allows users to send out their thoughts in 140 characters or less. Only a third of Twitter users are active, though, and India has an active ‘Twitterati’ of an estimated 10,000 people. Several Indian companies are now embracing the service. It is immensely popular and was highly useful during breaking news events such as 26/11. Some of the users who have left their indelible mark using this tool — Shashi Tharoor and, of course, Apro SRK.

While some dismiss them as a waste of time, Internet sites such as Facebook, Twitter, and LinkedIn have exploded in popularity, giving easy access to a potentially huge amount of new business.

Business and social media :

The ultimate transformation that is taking place today is within the business landscape worldwide — and increasingly so in India — where companies are beginning to leverage informal social networks to engage customers, soothe ruffled feathers, strengthen their brands and even hire people. For companies in India, the reasoning is simple : While Indian PC and Internet penetration rates are relatively lower than in the West, India has one of the largest Internet populations in the world — some 60 million regular users (not including mobile access). Moreover, these users are the most sought-after customers, with high disposable incomes, and companies with clear online media plans are waking up to the fact that they can reap the benefits of engaging with this audience. Those that don’t risk losing the customers that they already have, or slipping behind their more savvy competitors.

Here is a real life instance of how social media can influence change :

Take a look at the interactive digital marketing site that Tata Motors built when the Nano was launched. This site had games built into it, where people could customise colours and pick their favourite ones — thereby (ahem) sneakily helping the car company figure out which ones to use on the Nano. (A clever idea, but far removed from a social media forum.) However, when Tata Motors did launch the Nano, there was no mistaking its intention to use a full-fledged social media strategy. The company set up groups on Facebook and Orkut, hoping to target the numerous unofficial ‘Nano’-centric groups that had parked themselves on those sites. To its complete surprise, it found that one unofficial group on Orkut dwarfed the official ones — and it would have been a fatal mistake to ignore members not under the official Nano fold. A spokesperson for the company said, “We engage with people on these sites, too. We react to criticism of our car and try to explain our position. Also, we often find that before we can react to the criticism, there are other members who come up to defend the car.” As a matter of fact, even presently, the official groups on these two sites, at around 17,000 members, are much smaller than the largest unofficial group on Orkut, with around 52,000 members.

Here’s another example :

Maruti Suzuki India is, strangely enough, a pioneer in online social marketing. Realising that there are several online communities for the highly popular Swift, it has created an online platform to bring together the 2,500 disparate online Swift users’ clubs in India. Earlier this year, the company actively enlisted bloggers and talked to the community during pre-launch activities for its latest Ritz.

There are others — Hershey’s, Domino’s, Apollo Hospitals, Nokia — to name a few.

Avoiding traps :

It’s important to understand that social media isn’t for everyone and should not be used for everything. For instance, the Chief Marketing Officer of a large corporate group shared her experience, saying online social media is not an ideal platform for business-to-business (B2B) interactions. “It is a great way of getting messages about your company across, but I would neither buy nor sell anything using social media,” she says. Also, having a presence in online social media, or running ads there, doesn’t mean that the company will emerge an overnight success. In fact, far from it. “It is a misconception among many that this is a procedural thing, which it’s clearly not. It is a highly creative space that requires that marketers identify the space, the nature of stakeholders involved, what makes people tick within that space and, importantly, listen to people — and not try and sell things to them.” According to experts, the biggest mistake that anyone can make is to use the medium to push their products.

Another problem is that of measuring success. Even though there are advanced analytical tools available on the Internet, classifying a ‘successful campaign’ in social media is extremely difficult and can also be manipulated using something called ‘click fraud’. There are few benchmarks to measure success online unlike television adverts. A company can claim any number of sign-ups for a digital campaign, but never release how many were translated into sales. Also, beware of social media experts. The landscape is littered with them, many of whom have no legitimate professional experience in the field. Much like the Internet company era, social media is the new in-thing and these hucksters are simply surfing the next big wave, hoping to get rich.

The second part of this article will be printed in the next issue of the BCAJ. Watch this space for the pitfalls and the dark side of social marketing.



Tech update

Computer Interface

(This is the concluding part of this write-up, continued from
the previous month’s Journal.)

 

Other recent developments in the mobile ecosystem :

iPhone 4’s antenna problem. Apple Inc. received a lot of bad
press this month. There were several customer complaints about the design of its
phone’s antenna.

Some complained that the smart phone, which was launched a week ago to
blockbuster sales, loses signal strength when cupped in a way that covers the
lower left corner.

Apple Inc. had to (publicly) accept that its iPhones
overstate wireless network signal strength. Apple apologised to customers in an
open letter and said it was “stunned to find that the formula” it uses to
calculate network strength “is totally wrong” and that the error has existed
since its first iPhone. The letter also said that “Users observing a drop of
several bars when they grip their iPhone in a certain way are most likely in an
area with very weak signal strength, but they don’t know it because we are
erroneously displaying 4 or 5 bars”. Apple shot down users and outside
engineers who said the signal problems were due to faults in its new antenna
system, in which the antenna is incorporated in the casing. The company stated
that the “big drop in bars is because their high bars were never real in the
first place”, further adding that when users noticed a dramatic drop in the
number of signal strength bars on their phone’s display, it was likely due to
weak network coverage in that area.

The company said the incorrect formula was present in the
original iPhone — released in 2007 — and promised to fix it by conforming to
AT&T guidelines for signal strength display through a free software patch that
would be issued within a few weeks. The software update will also be available
for the iPhone 3GS and iPhone 3G. Apple maintained the iPhone 4’s wireless
performance remains “the best we have ever shipped.” It also reminded users
that they could return their smart phones within 30 days of purchase for a full
refund.



A direct result of this issue is that :

  •   A suit has been filed against Apple for the poor reception.

  •   Another class action suit has been admitted on the issue of restrictive
      trade practice — Apple and AT&T’s marketing tie-up.

  •   A major (reputed) consumer goods publication in the US refrained from
      giving the iPhone 4 the much-coveted ‘Buy’ recommendation.

  •   A senior member of the team (directly) responsible for the antenna has put
      in his papers. It’s being called Applegate’s first casualty.

An indirect consequence of this issue is that :

  •   Apple has “earned” the dubious tag #fail on Twitter.

  •   Sales of Android phones are picking up; as per the latest reports, they
      are higher than those of the iPhone 4.

Microsoft discontinues Kin after 48 days. Just 48 days
after Microsoft began selling the Kin, a smart phone for the younger set, the
company discontinued it because of disappointing sales. The swift turnabout for
the Kin, which Microsoft took two years to develop and whose release was backed
with a hefty ad budget, is the latest sign of disarray for Microsoft’s recently
reorganised consumer product unit. While neither Microsoft nor Verizon Wireless,
which sold the phone exclusively, disclosed the sales figures, media reports
suggest that sales were disappointing. In fact, Verizon is said to have slashed
the prices of the phones to $50 from $200 for the higher-end model and to $30
from $150 for a stripped-down version. Microsoft said it would cancel the
pending release of the Kin in Europe and would work with Verizon Wireless to
sell existing inventories. Microsoft indicated that it would shift employees who
worked on the Kin to the team in charge of Windows Phone 7, a coming revision of
Microsoft’s operating system for smart phones, which is due in the fall.

Kin, according to the grapevine, was dubbed an absolute
failure. It surprised many that Microsoft, often regarded as a company known
for sticking with new products and improving them over time, killed a product so
quickly. Microsoft’s consumer products unit has been struggling to offer a
credible competitor to Apple’s products. It has chased the iPod with its
Zune for several years with little effect. Apple’s iPhone, as well as an array
of smart phones powered by Google’s Android software, are more recent
challenges. Microsoft also recently cancelled a project to develop a tablet
computer that would compete with Apple’s popular iPad.


IBM endorses Firefox as in-house web browser.

New York State-based IBM, known by the nickname “Big Blue,”
has a corporate history dating back a century and now reportedly has nearly
400,000 workers. Firefox is the second most popular web browser in an
increasingly competitive market dominated by Internet Explorer software by
Microsoft. Despite this fact, technology giant IBM wants its workers around the
world to use free, open-source Mozilla Firefox as their window into the
Internet. All new computers for IBM employees will have Firefox installed and
the global company “will continue to strongly encourage our vendors who have
browser-based software to fully support Firefox”.

Making Firefox the default browser means that workers’
computers will automatically use that software to access the Internet unless
commanded to do differently. Rumour has it that, going forward, any employee who
is not using Firefox will be strongly encouraged to use it as their default
browser. The feeling within the management is that while other browsers have
come and gone, Firefox is now the gold standard for what an open, secure, and
standards-compliant browser should be. Open-source software is essentially
treated as public property, with improvements made by anyone shared with all.


While Firefox is the second most popular web browser, Google Chrome has been steadily gaining market share. Last week, it replaced Apple’s Safari as the third most popular web browser in the United States. The takeaway is that we will continue to see one browser or another become faster or introduce new features, and then yet another will come along and be better still, Firefox included.

At the cost of repeating myself . . . . (refer to the BCAJ Jan 2010 issue), survival in the new mobile ecosystem is going to be very tough. The casualties on this battlefield as of now :

  •     Kin — Microsoft’s smart phone
  •     Google is planning to hang up on the Nexus One and plans to shelve Wave
  •     Nokia is looking for a new CEO
  •     Applegate’s first casualty
  •     Blackberry is battling shrinking market share (losing to the iPhone and Android phones) and, trying to crawl back into the limelight, has recently launched the Blackberry Torch.

The hunter is now the hunted.

C’est la vie.

Social Networking: Boon or Bane

Computer Interface

Part-2

(I would like to clarify at the outset that this write-up
does not seek to malign or discredit anyone/any site in particular. My
observations and comments are merely a reflection of what is already available
in the public domain.)

In part 1 of this write-up, we briefly discussed the growing
importance of social media and networking in today’s business and personal
environment. While there are many who swear by this recent (and highly potent)
phenomenon, there is a growing number of users who, having burnt themselves,
speak in hushed tones about the disasters that have already struck and the ones
that are waiting to happen.

The hype :

Notwithstanding the perils, most people are happy to join the
bandwagon. In general, if you ask anyone the real reasons for him or her joining
Facebook or Twitter, the responses you get will range from the standard
“keeping in touch with friends” to “it’s hip”, “you have got to be in the groove”,
etc. Many are on the network for the heck of it (which in reality translates to
— due to peer pressure). Largely the popularity is due to the hype about social
media (including the “if you are not on it, you are toast” types), and the fact
remains that most new joiners are clueless about what they are signing up for.

Interestingly, one recent statistic (which was proudly
reported in all leading forms of news media) suggests that teenagers, who (for
the record) are the most prolific users of social networking sites, post as many
as 100 status updates a day on their social networks.

Hmmm ! ! ! . . . 100 status updates . . . Consider that in
a day you have 24 hours; out of these 24 hours, eliminate 8 hours for the
natural instincts of food and sleep, and the remainder is 16 hours.

In these remaining 16 hours the user would have 960 minutes to
post these 100-odd messages. If that is the case, to send out 100 updates, the
required (run rate) frequency would be one update every 9.6 minutes. Wow ! ! ! !
That calls for an ‘AWESOME ! ! ! !’
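The run-rate arithmetic above can be checked in a few lines (a throwaway sketch; the 8 hours set aside for food and sleep is simply the assumption made in the text) :

```python
# Back-of-the-envelope check: how often must a user post
# to fit 100 status updates into one waking day?
HOURS_IN_DAY = 24
HOURS_FOR_FOOD_AND_SLEEP = 8   # assumption from the text above
UPDATES_PER_DAY = 100

waking_minutes = (HOURS_IN_DAY - HOURS_FOR_FOOD_AND_SLEEP) * 60
minutes_per_update = waking_minutes / UPDATES_PER_DAY

print(waking_minutes, "waking minutes; one update every",
      minutes_per_update, "minutes")
# prints: 960 waking minutes; one update every 9.6 minutes
```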

No wonder these teenagers turn around and question the very
existence of life beyond Twitter and Facebook. There just isn’t any time left
for them to do anything else.

Any sane person would equate this awesome feat with
obsession. You may not believe this, but do a search on Google (or Bing or
Yahoo, for that matter) and you will find reports about the death of a toddler
(there could be more). Apparently a Japanese couple was so busy raising their
virtual child in a social network game that they forgot to feed their child in
the real world, and the toddler died of malnourishment. Surely now the only word
coming to your mind would be ‘SHOCKING’.

The perils :

Here’s another example : Hordes of people, while registering
with such sites, part with personal details (and that too in amazing detail,
mind you). Details which by any measure are sensitive and of a personal nature.
These sites ask you who you are, where you live, what you do, when you do it.
They want to know how-when-where — with who . . . . blahblahblah . . . . They
want details of all your friends, relatives, acquaintances, etc. They will even
do the good thing of asking the same from all your connections. Without these
you are not ‘assimilated’ (sounds like the Borg in Star Trek — TNG) or not a
‘part of the gang’. The depth of the information sought is more detailed than
some of the best KYC (know your customer) checklists I have seen in the recent
past.

You know what the best part of this is . . . . the user
concedes most of these details (which are very sensitive and personal)
willingly. So what’s wrong with that ? ? ? ? Well, for starters, nobody reads
the disclaimers; even worse, most people can’t comprehend the perils of not
doing so before signing up . . . . yes, it’s the part where the users accept the
terms and conditions without reading them (let alone understanding the
consequences). If you ask me, it’s only a matter of time before this information
falls into the hands of all those wrong sorts of people. Mind you, this
information, in one form or the other, at one time or the other, can fall into
the hands of telemarketers, scamsters, your boss or bosses and of course (since
this magazine is read by a lot of tax practitioners) you know who. You’re
thinking “THAT’S IMPOSSIBLE” or “THAT’S EXAGGERATING IT A BIT TOO
MUCH” . . . . is it ? ? ? ?

Let’s take a simple example. Say Mr. A is also Mr. Popular on
the social network. He starts updating (speaking his mind). All the updates
instantly reach all the people connected to him. Similarly, there would be other
people on his network who update their status. The natural response would
be . . . . “That’s the intention; they are my friends, colleagues,
family . . . . they wouldn’t do me any harm.”


That, my friends, is the proverbial weakest link in the
chain. While Mr. A may have some control over who is connected to his network
and who accesses his information, the same cannot be said about the people
on his network. It may be that he is not very friendly with someone (it could
even be his boss) and that someone is very chummy with someone else on A’s
‘controlled’ network. Mr. A may or may not be aware of this, and might not even
approve (or for that matter disapprove) of it. Needless to say, even a single
slip-up by Mr. A or his connections would (very harshly) change their opinion
about ‘the theory of six degrees of separation’. Still don’t believe me,
do you ? ?

OK, here goes nothing . . . . read all the recent
reports on the ongoing spat between a certain Member of Parliament and the head
of a popular sport venture. Several media reports suggest that the entire
episode would never have assumed the proportions that it did, had it not been
for the ‘tweets’ between the two parties. What’s more, these tweets (and many
more related to the other alleged misdemeanours) are likely to be used as
evidence against them (as I recall, one of the articles cited a similar spat
about a decade ago and how the parties involved could get away by denying
everything, but not this time, all due to the provisions of the Information
Technology Act, 2000). The OUTCOME — the media now calls one a twit who tweeted
too much, while the other party is waiting for the decision of the third umpire.

The scams :

As stated earlier, depending on who has access to this information, the user can be scammed, used and abused, taken advantage of, taken to task, or all of the above. Here's another instance :

I’ve Been Robbed ! Western Union Me Money !

When you accept someone as your friend on the network, he has access to most of your updates, your profile, your pictures, adventures, friends, etc. Surely you trust someone as a friend when you accept his invitation on Facebook or Twitter, or else why would you give an absolute stranger or an acquaintance access to your personal information? You can't be that naïve! Then one fine day, you're browsing around a social networking site and suddenly one of your friends IMs you to tell you that he is stuck in another country, has been robbed, has no wallet, and needs money to get out of the country. It's a horrible situation, but what are the odds that he found a computer to log on to in order to instant-message you?

So what do you do . . . . you ask him for the details and do the good thing (ahem! not the smartest thing) and wire him the money. At that moment you could be singing praises about what a boon social networking is, but when the true picture is revealed you may be horrified to learn that it was your own folly that got you scammed (in one of the oldest scams on the Internet).

How is it possible? It's fairly simple actually. All the scammer needs to do is (a) gain access to one account on the social network, (b) collate all possible personal information, and (c) list out all the (gullible) 'friends'. Then he can start the ball rolling.

In this instance a hacker/scammer gains access to the account of a (trusted) friend. Through the frequent status updates he knows what you and your friend are up to and how he can exploit you. The scammer thereafter manipulates others for financial gain in one of the most common ways. This is also called the London scam, or the Western Union scam. In most cases, users get fooled because the scammer (being proficient in his art) paints a very convincing picture of his predicament. The scammer uses all the personal information (available on the hacked user account as well as on your account) to gain your confidence (and, not to mention . . . . your money).

Don't believe me? Run a search on Facebook or Google; you will find that there are lots more like you.

While these were limited examples, news reports are littered with other 'disasters' (reset passwords, sign-ups for contests, etc.); you can search Google for Facebook and Twitter scams to learn about more of them. And for anyone seriously considering investing money in social media-based marketing, there are several examples of successful businesses shooting themselves in the foot with social networking.

Having digested the above reality (not a reality show, mind you, or is it a reality show? Can't tell the difference these days), the moot question is whether social media and networking are as good as they are cracked up to be, or is there more than meets the eye? Is it really worth all the time and energy, or is this one big scam?

While I don't have any definite answers, I do have these glaring instances which force me to think twice (no connection to the book of the same title published by Harvard Business Press). They bring out the dark side of social networking and provide some basis as to why one should be cautious of the brouhaha being raised about social media and networking.

Tech Update

Computer Interface

Samir Kapadia
Chartered Accountant

While change is inevitable, keeping up with change is also a challenge. This month I have chosen three hot topics (from among many more) that I thought would interest you more than the others. The topics are :

(1) Social networking

(2) 3G auction and rollout

(3) Microsoft launches Office 2010

Social media and networking — To be or not to be :

“To be or not to be — that is the question”. While many would immediately recall these words quoted straight out of Shakespeare, some (movie buffs like me) would think of Mel Brooks performing on stage and delivering his version of Shakespeare in the movie “To Be or Not to Be”. The scene where he repeats these words (over and over again) is one of my all-time favourites. No matter how many times I watch the movie, I keep coming back for more. But for many netizens, this is a very peculiar question to ask when one is discussing the merits and demerits of social media and networking tools.

While there is no doubt that social media and networking have changed the very face of marketing, recently (particularly last month) they have been at the receiving end. Popular sites like Facebook and YouTube faced a lot of flak, and in some cases outright bans.

Facebook was in the media for all the wrong reasons last month. Here are a few instances :

  • Facebook faced a public backlash and was banned in Pakistan and in Bangladesh over a controversy related to a certain drawing. The fallout began in Karachi (Pakistan), where people took to the streets protesting against the social networking site. The protests culminated in a ban being imposed on the site, and Bangladesh was quick to follow suit. Needless to say, emotions ran high that week. In fact, following the furore, six tech-savvy Pakistanis launched a halal version of Facebook by the name Millatfacebook.com. The question many are now asking is: do we really need another Facebook, and what's next, gender segregation?


  • Facebook was also under the spotlight on account of privacy concerns. Facebook's growth as an Internet social networking site has met criticism on a range of issues, especially the privacy of its users, child safety, the use of advertising scripts, data mining, and the inability to terminate accounts without first manually deleting all the content. Facebook Chief Executive Mark Zuckerberg had to respond by saying that the social network would roll out new privacy settings for its more than 400 million users, amid growing concerns that the company is pushing users to make more of their personal data public. A Google search on this issue throws up some very interesting insights into the controversy.


  • May 30-31 went down in history as 'quit Facebook day'. There has been a lot of angst amongst users on account of the privacy issues, frequent changes in personal settings, etc. While the response was hardly noteworthy (some 27,000 people quit Facebook), in India the day appears to have passed almost unnoticed.


  • A limit on the number of friends. For many the lucky number is 7; for others (yes, Facebook) the lucky number is 5,000. In case you didn't know, you cannot have more than 5,000 Facebook friends. While there has been some outcry against this 'arbitrary' limit, recent news reports suggest that the popular site is likely to enforce it, much to the disappointment of its loyal followers.


To summarise, while social media continues to grow as a popular medium, questions are being raised regarding its unintended consequences. Hence the question that begs to be answered: “To be or not to be?”

3G auction and rollout plans :

The recently concluded auction of 3G spectrum has brought a lot of cheer to many parties — the winners of the auction, the Government (the budget deficit may be more manageable) and the subscribers. There is a lot of excitement about the rollout plans. Equipment manufacturers have already started doling out 3G-ready phones, and consumers are lapping up the new models. The noteworthy issues which need to be considered are :

  • In spite of the amounts being so large, the entire licence fee was paid by the winners in one instalment. In fact, none of the winners asked for an extension (that includes MTNL and BSNL — though there are reports of BSNL asking for a refund). A lot is at stake; one could say that it is a make-or-break scenario, especially for the telecom service providers.


  • When one pays such a high licence fee, the cost of the service is naturally likely to be impacted. There are concerns regarding the pricing strategy and how the winners will recover these amounts, and the pricing is being watched very closely. Unfortunately for subscribers, it is likely to take at least 6-9 months before any of the winners launch 3G services.


  • In the meanwhile, subscribers are going about upgrading their phones. As mentioned, equipment manufacturers have already started doling out 3G-ready phones, with prices ranging from (as low as) Rs.3,500 to (as high as) Rs.35,000. Naturally, one needs to be aware of what is being offered. While these phones may be 3G-ready, performance (download speed, quality of content, battery life, memory) can vary significantly. Hence be careful; all that glitters may not be gold.


  • As expected (read my
    columns for January-March 2010), the investments and developments in mobile
    technology ecosystem have started gaining momentum. There are a lot of media
    reports about tie-ups for content, new investment in R&D, etc.


There is a lot more action waiting to happen, just wait and
watch.

Microsoft launches Office 2010 :

While I have not been able to check the new offering myself, I did some digging. News reports on this product say the following :

    Microsoft Office 2010 brings a set of important incremental improvements to the office suite. Among them : making the Ribbon the default interface for all Office applications; adding a host of new features to individual applications, e.g., video editing in PowerPoint and improved mail handling in Outlook; and introducing a number of Office-wide productivity enhancers, photo editing tools and a much-improved paste operation.

    What is being touted as the most important change to Office in years is a Web-based version for both enterprises and consumers, plus access to Office from mobile phones and other mobile clients. Reportedly, Microsoft has also strengthened the links between Office and various Microsoft communication server products. Apparently, if you use Microsoft Office Communications Server 2007 R2 and Microsoft Office Communicator 2007 R2 with Office 2010, you'll be able to see the availability status of other people with whom you work and ways to contact them, such as e-mail and instant messaging. SharePoint is even more intimately tied to Office, and lets people collaborate on Office documents.

    The Ribbon, the graphical system that replaced the old menus and submenus with buttons for common tasks grouped together in tabs, was introduced in Office 2007. But apparently Microsoft didn't go whole hog with it back then; Outlook, among other applications, was not given the full Ribbon treatment. In Office 2010, all applications share the common Ribbon interface, including Outlook, OneNote and all other Office applications, and even SharePoint. Love it or hate it, the Ribbon is here to stay.

    Email/Outlook users are most likely to be pleased with the new version of Outlook, which adds a variety of features designed to help solve the most common productivity problem: e-mail overload. One of the most useful new features is called Quick Steps, which speeds up mail handling considerably. Right-click on a message, and you can choose from a variety of actions to take on it: moving the message to a specific folder, forwarding it to your manager, setting up a team meeting with its recipients, sending e-mail to an entire team, and so on. This version also tackles one of Outlook's perennial problems: how poorly it follows threads of messages. A related feature helps cut down on e-mail clutter: the ability to 'clean up' a conversation, which deletes all of the unnecessary quoted and previous text in long e-mail threads, leaving only unduplicated versions. However, once you do that, all of the quoted text and e-mails are actually deleted, not just hidden, so use this feature carefully. It would be more useful if you were given the option of hiding the text rather than deleting it completely.

    Not much is new in Excel, though! Excel hasn't been touched as much as the other major applications in Office 2010, but it's not a total loss; there have been some useful additions. The most important is called 'Sparklines' — small cell-sized charts that you can embed in a worksheet next to data to get a quick visual representation of it. For example, if you had a worksheet that tracked the performance of several dozen stocks, you could create a Sparkline for each stock that graphed its performance over time in a very compact way. Conditional formatting — the ability to apply a format to a range of cells and have the formatting change according to the value of the cell or formula — has been improved as well, including the addition of more styles and icons.

    PowerPoint, apparently, has entered the video age. Office 2010 introduces a slew of enhanced video features, although in the Technical Preview not all were working properly. Key among them is a set of basic video editing tools built directly into PowerPoint. They're not as powerful as full-blown video editing software, but they work well for common tasks such as trimming and compressing videos and adding fade-ins and fade-outs. Highlight a video you've embedded in a presentation, and the tools appear in the Ribbon. Also useful is a set of video controls you can use during the presentation to pause, rewind, fast-forward and so on — something the previous version of PowerPoint did not have.

I am hoping that I get an opportunity to test this new product soon.

WhatsApp

About this article:
WhatsApp is an instant messaging app (application software for phones). For many, this app is a cheap substitute for SMS text messaging and can be called a non-BlackBerry version of the BlackBerry Messenger. The app works across platforms and is easy to use. Readers may find this write-up informative.

I still remember the day a close friend of mine kept pushing me to install this app on my phone, all the while trying to convince me that it was really worth a shot. I was a bit reluctant, for the simple reason that I would have to pay money to purchase the app. An unpalatable thought at the time; it would be a first for me (so far I have installed as many as 91 apps on my phone, and the ratio of paid to free apps is 2:89). My friend kept chiding me that the cost was negligible in comparison to the benefit, but I just couldn't swallow the thought of paying for an app.

While you may say that I am tight-fisted, I would prefer the word frugal. But trust me, I am not the only one. If you are not convinced, check out the Maruti Suzuki ad, where a salesman is trying to sell a luxury yacht to a 'rich man'. The scene begins with the salesman praising one feature after another . . . . impressive, one would say . . . . instead, the 'rich man' asks 'kitna deti hai' (meaning, how much mileage does it give?), and then comes the tag line: “for a country obsessed with mileage, we produce the most fuel-efficient cars”. Just like the 'rich man' in the ad, there are countless cell phone users (many of whom own pretty fancy smartphones) who find SMSing an expensive mode of communication. And rightly so: when you can speak to one another for as low as 1p per second, why pay such a high price for a lowly SMS, more so when you know that the phone companies are making a fast buck on it? Well, now you have an alternative — WhatsApp.

WhatsApp provides an alternative texting service that closely resembles standard SMS text messaging. Simply put, WhatsApp Messenger is a smartphone-to-smartphone messenger. I guess this is where I take on the role of the salesman trying to sell you the yacht (don't worry, your time will come and you can ask kitna deti hai). Here are a few reasons why you should install and use this app :

  • The app works on the iPhone (iOS), BlackBerry (BlackBerry OS), Nokia (Symbian) as well as Android phones such as Samsung's. Arguably, that's much better than BlackBerry Messenger ('BBM'), which is limited to BlackBerry devices.

  • Unlike standard
    text messaging, though, you can set a status message which other
    WhatsApp users can see, both in the Favourites page and in the main
    contact list.

  • Not only can you send photos, you can also attach audio and video notes, and even your geographic location, to WhatsApp messages. Plus, it provides an easy way to save your message history as a text file (see pic).

  • You could send a million
    messages, but pay a pittance. The messages can be sent to friends and
    family across the world (just like BBM) for the same cost.

  • BBM requires you to know your friends' PINs; you can say goodbye to that now. Once you and your friends have installed WhatsApp, you don't need anything else. This is actually one of the best parts — WhatsApp almost automatically identifies who in your phonebook has installed WhatsApp and lets you chat with them instantly. In fact, they will automatically appear in your Favourites.

  • WhatsApp gives you the option to remain always on, i.e., to stay connected with your buddies. If you choose to go offline, don't worry: the messages will be stored on the server and pushed to your phone as soon as you log on.

  • Messages
    are usually received very quickly and notifications appear via push,
    which you can configure in the phone’s settings if you want.

  • Like BBM it allows you to form groups (up to 10 people) where you can share messages with a group of friends.

  • Overall, WhatsApp Messenger is a huge benefit to the iPhone community and to smartphone users in general, because it lets you keep the text messages flowing to your friends for next to nothing . . . . arguably, close to what it actually costs the cell phone providers to deliver them. Come to think of it, you have nothing to lose but your expensive texting plans.

Well! Now's the part where you ask kitna deti hai?

To begin with, it will cost you US $1 (for the iPhone, that is; for BlackBerry, Nokia and Samsung phones you can use it free for one full year).

Unlike
standard SMS messaging, WhatsApp uses your phone’s data plan to send
and receive messages. So if you use the app a lot, then your data usage
will increase. (You can monitor these stats from within the app).
Similarly, if you travel outside of your phone carrier’s supported area,
it’s possible that you’ll incur data roaming charges if you leave that
option enabled. Staying attached to a Wi-Fi connection should alleviate
most of those concerns (but as a side effect, constant pinging to the
Wi-Fi network will drain your battery power very fast).

If you are extra security-conscious, you might be concerned that your phone number is known to the app's developer and that all messages go through its servers. The privacy page on the WhatsApp Website states that the company will 'Do No Evil' with your data, and the developer lets you know that messages are stored on its system only until they have been retrieved, at which point they are deleted. The company has also confirmed that WhatsApp text messages, like most e-mail messages, are sent across the Internet unencrypted (contact data is encrypted, however). That's not necessarily a problem; just something certain types of users may need to be aware of.
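The store-and-forward behaviour described above (messages held on the server only until the recipient retrieves them, then deleted) is a common messaging pattern. As a rough illustration, and emphatically not WhatsApp's actual implementation, the server-side logic might look like this minimal Python sketch:

```python
from collections import defaultdict

class StoreAndForwardServer:
    """Holds messages for offline recipients; deletes them once delivered."""

    def __init__(self):
        self._queues = defaultdict(list)  # recipient -> pending messages
        self._online = set()

    def send(self, sender, recipient, text):
        """Deliver immediately if the recipient is online, else store."""
        msg = (sender, text)
        if recipient in self._online:
            return [msg]                      # delivered, nothing stored
        self._queues[recipient].append(msg)   # held until retrieval
        return []

    def connect(self, user):
        """User comes online: push all pending messages, then delete them."""
        self._online.add(user)
        return self._queues.pop(user, [])     # removed from the server
```

Once a user connects, the pending queue is handed over and dropped, which is exactly the "stored only until retrieved" promise in the privacy page.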

The only other limitation is the requirement that your friends also have the WhatsApp Messenger app installed on their phones. If you're the early adopter within your circle and none of your friends have downloaded the app yet, you're not going to have anyone to talk with. Luckily, the app makes it easy to invite your friends to download it, either by sending them an e-mail or a standard text message.

If you liked what you've read above and want to try this app, you can visit your platform's app store (iTunes, BlackBerry App World, Ovi Store or the Android Market) and download the software. The whole process is fairly simple. The app walks you through a quick set-up the first time you open it. You register your phone number with the WhatsApp service, which verifies your identity by sending a code (ironically, via a standard text message) that you then enter into the set-up screen. After that, the app asks for permission to look through your address book for contact numbers that are already registered with WhatsApp and then places them into your list of Favourites. Then you're finished and ready to start texting with your friends. Once you and your friends have gone through this short procedure, texting via WhatsApp Messenger is similar to standard SMS messaging . . . . only much cheaper.

I would love to hear about your experience after using this software. You can send your emails to sam.client@gmail.com

Disclaimer:
This write-up is not intended to promote or malign any particular product, feature or company. Further, it should not be considered an endorsement of any one product over another. The sole purpose of this write-up is to share knowledge and user experience.


The basics of cloud computing Part 2

About this article:
The previous write-up on this topic was intended to be an eye-opener on the subject. This one briefly discusses certain important aspects of cloud computing, including key terminology and some common offerings.

Background:
Cloud computing, as explained in the previous issue, is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. By providing on-demand access to a shared pool of computing resources in a self-service, dynamically scaled and metered manner, cloud computing offers compelling advantages in speed, agility and efficiency.

Moving on, one needs to appreciate that cloud computing is currently at an early stage of its life-cycle; cloud computing as we know it is the evolution and convergence of several trends. In order to benefit from this fast-evolving model, one needs to understand certain important aspects and key terminology used in the context of cloud computing.

Commonly used models of cloud computing:
The first in the order of things is for the readers to understand the different (common) cloud computing models available in the market. The models currently in vogue are:

  • Private clouds
  • Public clouds
  • Community clouds
  • Hybrid clouds

Private clouds:
These refer to clouds for the exclusive use of a single organisation. Such clouds are typically controlled, managed and hosted in private data centers. However, this is not a hard and fast rule; there are exceptions wherein the private cloud is for the exclusive use of one organisation, but its hosting and operation are outsourced to a third-party service provider.

Public clouds:
These refer to clouds which are leased out for use by multiple organisations (tenants) on a shared basis. These clouds are hosted and managed by a third-party service provider. They are fairly common and serve small and medium enterprises. Examples would be Microsoft Office 365 and Google Docs.

Community clouds:
These refer to clouds for use by a group of related organisations who wish to make use of a common cloud computing environment. For example, a community might consist of the different branches of the military, all the universities in a given region, or all the suppliers to a large manufacturer. A real-world example is the computing grid of the Large Hadron Collider. (Look this up on the Internet; you may find the facts and dynamics hard to believe.)

Hybrid clouds:
These refer to situations where a single organisation adopts both private and public clouds for a single application, in order to take advantage of the benefits of both. For example, in a 'cloudbursting' scenario, an organisation might run the steady-state workload of an application on a private cloud; when a spike in workload occurs, such as at the end of the financial quarter or during the holiday season, it can burst out to use computing capacity from a public cloud, and then return those resources to the public pool when they are no longer needed. (Somebody please wake up the Tax Department; this would be handy on due dates.)
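The cloudbursting idea can be reduced to a very small piece of placement logic. The sketch below is purely illustrative (the capacity and demand figures are made up): the private cloud is used first, any overflow spills to the public cloud, and the rented share drops back to zero once the spike passes.

```python
def place_workload(demand_units, private_capacity):
    """Serve demand from the private cloud first; any overflow
    'bursts' out to the public cloud. Nothing is rented when the
    private cloud alone can cope."""
    private = min(demand_units, private_capacity)
    public_burst = max(0, demand_units - private_capacity)
    return private, public_burst

# Steady state fits in-house; the quarter-end spike bursts out.
print(place_workload(80, 100))   # normal month -> (80, 0)
print(place_workload(140, 100))  # quarter-end spike -> (100, 40)
```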

Of the above, private clouds and public clouds are the most commonly seen and implemented.

Advantages:
While advantages such as efficiency, availability, scalability and fast deployment are common to both public and private clouds, certain advantages are unique to one or the other. Some benefits unique to public cloud computing are:

  • Low upfront costs — Public clouds are faster and cheaper to get started with, giving users the advantage of a low-cost barrier to entry. There is no need to procure, install and configure hardware.
  • Economies of scale — Large public clouds enjoy economies of scale in terms of equipment purchasing power and management efficiencies, and some may pass a portion of the savings on to customers.
  • Simpler to manage — Public clouds do not require IT to manage, administer, update, patch, etc. Users rely on the public cloud service provider instead of the IT department.
  • Operating expense — Public clouds are paid for out of the operating expense budget, often by the users' line of business rather than the IT department. Capital expense is avoided, which can be an advantage in some organisations.


Some benefits are unique to private cloud computing:

  • Greater control of security, compliance and quality of service — Private clouds enable IT to maintain control of security (prevent data loss, protect privacy), compliance (data handling policies, data retention, audit, regulations governing data location), and quality of service (since private clouds can optimise networks in ways that public clouds do not allow).
  • Easier integration — Applications running in private clouds are easier to integrate with other in-house applications, such as identity management systems.
  • Lower total costs — Private clouds may be cheaper over the long term compared to public clouds, since it is essentially owning versus renting. According to several analyses, the breakeven period is between two and three years.
  • Capital expense and operating expense — Private clouds are funded by a combination of capital expense (with depreciation) and operating expense.
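The owning-versus-renting trade-off in the last two points can be made concrete with a little arithmetic. The figures below are entirely hypothetical (a one-time capital cost plus a monthly running cost for a private cloud, against a pure monthly fee for a public cloud); the point is only that the crossover arrives after a couple of years, consistent with the two-to-three-year breakeven cited above.

```python
def breakeven_month(private_capex, private_opex_pm, public_opex_pm,
                    horizon_months=120):
    """First month at which the cumulative cost of owning (capex plus
    monthly opex) is no more than the cumulative cost of renting.
    Returns None if renting stays cheaper over the whole horizon."""
    for month in range(1, horizon_months + 1):
        owning = private_capex + private_opex_pm * month
        renting = public_opex_pm * month
        if owning <= renting:
            return month
    return None

# Hypothetical: 30 units upfront plus 1 unit/month in-house, versus
# 2 units/month rented; the crossover lands at month 30 (2.5 years).
print(breakeven_month(30, 1, 2))  # -> 30
```

If the public cloud's monthly fee is no higher than the private cloud's running cost, renting never loses and the function returns None, which is why the comparison only favours owning when the rented premium is real.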

Summarising:
To recap, cloud computing is characterised by real, new capabilities such as self-service, auto-scaling and chargeback, but is also based on many established technologies such as grid computing, virtualisation, SOA shared services and large-scale, systems management automation. The top two benefits of cloud computing are speed and cost. Through self-service access to an available pool of computing resources, users can be up and running in minutes instead of weeks or months. Making adjustments to computing capacity is also fast, thanks to elastically scalable grid architecture. And because cloud computing is pay-per-use, operates at high scale and is highly automated, the cost and efficiency of cloud computing is very compelling as well.

In the next write-up:
While cloud computing offers compelling benefits in terms of speed and cost, clouds also present serious concerns around security, compliance, quality of service and fit. A number of issues are holding some organisations back from rushing to the cloud. The top concern, far and away, is security. While one can debate the relative security of public clouds versus in-house data centers, the bottom line is that many organisations are not comfortable entrusting certain sensitive data to public clouds where they do not have full visibility and full control. So some particularly sensitive applications will remain in-house while others may take advantage of public clouds. Another concern is quality of service, since clouds may not be able to fully guarantee service level agreements in terms of performance and availability. A third area of concern is fit: the ability to integrate with in-house systems and to adapt SaaS applications to the organisation's business processes. Organisations are likely to adopt a mix of public and private clouds: some applications will be appropriate for public clouds, others will stay in private clouds, and some will not use either.

Until the next write-up . . . . Cheers!


Tips and tricks — Securing your systems quick and easy

Introduction

Computers and computer networks are usually the heart and mind of any computing ecosystem, whether at your office or at your home. One generally tends to attach a lot more significance to the business ecosystem than to the one at home, and the common excuse is cost-benefit analysis. Often the argument is that the data in the office is sensitive and therefore needs to be secured. This ignores the fact that the data at home is far more personal, and any compromise there may well turn out to be a fatal error.

This article aims to give some quick, easy, do-it-yourself tricks for securing your computer, wireless networks and your phone.

For those of you who missed it . . . . last month the BCAS organised a free lecture meeting on ethical hacking. The speaker was Master Shantanu Gawade: a master not only because of the knowledge he possesses on the subjects of hacking, computer programming, etc., but also because he is a tender 14 years of age. Shantanu's presentation evoked mixed reactions of shock and awe. Most of the members present were shocked by the potential threats that they had inadvertently exposed themselves to, and in awe of the skills and knowledge displayed by a precocious boy of 14. Those who were able to comprehend the dangers that lay ahead asked: how do we deal with this menace, how do we insulate ourselves? Shantanu was candid enough to say that there are no silver bullets for this problem and that prevention is one of the best answers.

While it would be difficult to address every single issue, there are a few 'do-it-yourself' steps that you can take to reduce the threats. This write-up summarises the steps you can take

  •  to check whether you have left your WIFI network unsecured; and
  •  to secure your WIFI network.

Those of you who were present during Shantanu's presentation would instantly agree that the above would be a good starting point.

How safe is your WIFI network:

A WIFI network provides several advantages (no wires and no ugly holes in your wall are just two of them1). A WIFI network allows a user to access the network without being tied to one particular spot. In other words, the user has the convenience of moving from his desk to another desk or conference room, etc. (at home, from your living room to any other room) and still being able to access the Internet or your server. WIFI signals can travel within the periphery (i.e., 360° of the periphery) of the router/access point up to a particular range. You may say “it’s a huge convenience” and your neighbour might say “a huge convenience to me also”.

An unsecured connection allows neighbours and strangers access to your Internet connection and possibly your home network2. They could stream video over your connection, slowing down your own Internet access. If they have the skills, they may be able to search your hard drive for bank account numbers and other sensitive information. Even worse, they could download something illegal, such as pornography, or attack some critical infrastructure, and make it look to the police as if you’re the guilty party. (You may recall that the cybercrime cell had traced some terror emails to the houses of gullible citizens whose unsecured networks had been exploited by trouble-makers.)

So how do you protect yourself from such threats? While switching off the network may be the easiest way, the proper solution would be to use WPA2 security. WPA2 offers considerably more protection than the older standards, WEP and WPA, both of which can be cracked in minutes. WPA2 can also be cracked, but if you set it up properly, cracking it will take more of the criminal’s time than anything on your network is worth. Unless, of course, hacking networks is the criminal’s bread and butter and the sole purpose of his existence.

Locking your WIFI network

Step 1 in this direction would be to check your router’s menus or manual to find out how to set up WPA2 protection. Once you have activated the setting, the next step would be to lock it down with a secure password.

If Step 1 fails, then to get started, you’ll need to log in to your router’s administrative console by typing the router’s IP address into your web browser’s address bar. Most routers use a common address like 192.168.1.1, but alternatives like 192.168.0.1 and 192.168.2.1 are also common. Check the manual that came with your router to determine the correct IP address; if you’ve lost your manual, you can usually find the appropriate IP address on the manufacturer’s website. Once you have found the appropriate IP address, first change the default password. Generally the default password is ‘admin’ or something similar provided by the manufacturer. Retaining the default password is very risky, because it is rumoured that there’s a public database containing default login credentials for more than 450 networking equipment vendors, and there is a high probability that the hacker has already accessed it.
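Incidentally, the reason addresses like 192.168.1.1 turn up on so many routers is that they fall within the reserved private ranges, which are never routed on the public Internet. A quick way to verify this, sketched here with Python’s standard ipaddress module:

```python
import ipaddress

# Common default router addresses all fall inside the reserved
# private ranges, which is why they never clash with public addresses.
for addr in ["192.168.1.1", "192.168.0.1", "192.168.2.1"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```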

Though no password is foolproof, you can build a better password by combining numbers and letters into a complex and unique string. It is also important to change both your Wi-Fi password (the string that guests enter to access your network) and your router administrator password (the one you enter to log in to the administration console — the two may sometimes be the same) at regular intervals.
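If inventing such a string feels like a chore, let the computer do it. The sketch below uses Python’s secrets module, which is designed for security-sensitive randomness; the length of 16 and the letters-plus-digits alphabet are assumptions you can adjust to whatever your router accepts:

```python
import secrets
import string

def make_password(length=16):
    """Return a random password mixing letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # e.g. a string like 'q3Zr9TkWb1xPfa7L' (random each run)
```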

Step 2 is to change the Service Set ID (‘SSID’):

Every wireless network has a name, known as a Service Set ID (or SSID). The simple act of changing that name discourages serial hackers from targeting you, because wireless networks with default names like ‘linksys’ are likelier to lack custom passwords or encryption, and thus tend to attract opportunistic hackers. Don’t bother disabling SSID broadcasting; you might be able to ward off casual Wi-Fi leeches that way, but any hacker with a wireless spectrum scanner can find your SSID by listening in as your devices communicate with your router.

Step 3 is to enable WPA2 security:

If possible, always encrypt your network traffic using WPA2 encryption, which offers better security than the older WEP and WPA technologies. If you have to choose between multiple versions of WPA2 — such as WPA2 Personal and WPA2 Enterprise — always pick the setting most appropriate for your network. (Unless you’re setting up a large-scale business network with a RADIUS server, you’ll want to stick with WPA2 Personal encryption.)

Step 4 is to enable MAC filtering:

Every device that accesses the Internet has a Media Access Control (‘MAC’) address, which is a unique identifier composed of six pairs of alphanumeric characters. You can limit your network to accept only specific devices by turning on MAC filtering, which is also a great tip for optimising your wireless network. Running ipconfig will display your current network configuration, including the MAC address. To determine the MAC address of any Windows PC, do the following:

  •  Open a command prompt: select Run from the Start menu, type cmd and press Enter (Windows 7 users can just type cmd in the Start Menu search box).
  •  At the command prompt, type ipconfig/all and press Enter to bring up your IP settings.
  •  If you’re using Mac OS X, open System Preferences and click Network. From there, select Wi-Fi from the list in the left-hand column (or Airport in Snow Leopard or earlier), click Advanced . . . in the lower left, and look for ‘Airport ID’ or ‘Wi-Fi ID’.
  •  If you need to find the MAC address of a relatively limited device such as a printer or smartphone, check the item’s manual to determine where that data is listed.

Thankfully, most modern routers display a list of devices connected to your network along with their MAC addresses in the administrator console, to make it easier to identify your devices. If in doubt, refer to your router’s documentation for specific instructions.
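For the curious, the MAC address of the machine you are working on can also be read programmatically. A minimal sketch using Python’s standard library follows; note that uuid.getnode() reports only one interface, and may fall back to a random value if no hardware address can be found:

```python
import uuid

def my_mac_address():
    """Return this machine's MAC address as six colon-separated hex pairs."""
    node = uuid.getnode()  # the MAC address as a 48-bit integer
    # Extract the six bytes, most significant first.
    return ":".join(f"{(node >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))

print(my_mac_address())
```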


Step 5 is to limit DHCP leases to your devices:

Dynamic Host Configuration Protocol (DHCP) makes it easy for your network to manage how many devices can connect to your Wi-Fi network at any given time, by limiting the number of IP addresses your router can assign to devices on your network. Tally how many Wi-Fi-capable devices you have in your home; then find the DHCP settings page in your router administrator console, and update the number of ‘client leases’ available to the number of devices you own, plus one for guests. Reset your router, and you’re good to go.

Step 6 is to block WAN requests:

This is the last step. Enable the Block WAN Requests option, to conceal your network from other Internet users. With this feature enabled, your router will not respond to IP requests by remote users, preventing them from gleaning potentially useful information about your network. The WAN is basically the Internet at large, and you want to block random people out there from initiating a conversation with your router.

Once you’ve taken these steps to secure your wireless network, lock it down for good by disabling remote administration privileges through the administrator console. That forces anyone looking to modify your network settings to plug a PC directly into the wireless router, making it nearly impossible for hackers to mess with your settings and hijack your network. In case you find the above steps difficult to follow, please engage the services of a professional and get it done before it’s too late.

Hope you have a safe computing experience. Cheers!

The basics of cloud computing

Part 3

About this article:
In the previous write-up on this topic we discussed certain important aspects of cloud computing, including key terminology and some offerings. This write-up focusses on certain issues which require consideration before one decides to opt for cloud services.

Background:
Cloud computing basically refers to providing the means through which everything, i.e., from computing power to computing infrastructure, applications, business processes to personal collaboration, can be delivered to you as a service wherever and whenever you need it. This model is fast emerging as the choice of several large and small businesses. The choice is quite natural considering the (assured) cost savings. Such savings can be either in the form of lower capital expenditure on hardware, software licences and infrastructure, or in the form of lower operational expenditure, i.e., operation and maintenance expenses, or reduced idle time, downtime, etc. (refer to the write-up titled Cloud computing basics — part I published in the BCAJ April 2011 issue).

Businesses, large and small, have several options, from private to public to hybrid clouds (refer to the write-up titled Cloud computing basics — part II published in the BCAJ May 2011 issue). Having several options can itself become a hurdle while making strategic investments. Cloud computing also brings to the fore certain unique concerns, concerns which are more significant from the ‘enterprise’ point of view.

Primary objective of moving to the cloud:
Moving to the cloud has definite advantages, such as improving business agility, reducing management complexity and controlling costs. As organisations evaluate how cloud computing can achieve these advantages, they are faced with numerous choices, and one needs to appreciate that simply moving towards a service-oriented cloud computing model does not automatically deliver benefits. To derive maximum benefit and Return on Investment (ROI), cloud computing needs to be considered as part of a larger move towards more effective management and integration. Needless to say, inadequate planning and half-baked cloud computing solutions may add complexities rather than reduce them.

Some myths and some clarifications:
While there are several myths and misconceptions associated with the topic of cloud computing, the ones that the readers of this journal are more likely to identify with are:

Myth 1:

Data security: Will the cloud service provider guarantee security?

One common concern amongst businesses looking to move to cloud computing is data security. Moving to the cloud primarily entails parking your data with the service provider, and this can be a discomforting thought. The very possibility of a threat to the confidentiality and security of the data is the source of discomfort.

From a practical standpoint, public cloud datacentres are amongst the most secure premises on the planet. Yet, at the logical level, a cloud provider with every security certification still can’t guarantee the integrity of specific servers, applications and networks if your applications are poorly written, set up or secured. Similarly, all the security practices of a cloud provider are meaningless if a customer organisation’s security practices are weak.

The key take-away here is that there are several layers of security to protect your data, but there is always a possibility of chinks in the armour.

Myth 2:

Data control: My organisation will be locked into one vendor and lose control of its data, if it moves to the cloud:

Almost every organisation would acknowledge that businesses need to store shared files securely. This would assume more importance when the organisation is engaged in providing medical, legal and financial services. These organisations are subject to strict local laws.

If one believes that the best way to keep your data a secret is to manage it yourself, then the moot question would be why stash your precious data offsite?

It is relevant to point out that the essence of cloud services is ‘flexibility’. One application may call another on a different cloud service, and data may be stored anywhere, including your own network, but still be accessible to cloud applications. No cloud provider offers a service that completely takes control of your environment. The best cloud solutions will be a combination of on-premise and off-premise services.

The key take-away is that while the service provider may control the infrastructure, the data is not entirely in his control. Sure, he controls your access to your own data, but his primary interest is merely the optimal utilisation of his resources (just like you).

Myth 3:

Cost savings: An organisation must move all its applications to a cloud service to be able to benefit fully from cloud computing:

Moving an entire datacentre to the cloud is a tall task. Practically, no cloud provider would recommend this, at least not in one go (if you ask me, you are inviting trouble). Ideally, one should adopt a step-by-step approach. One should start by identifying applications in one’s pipeline that can benefit the organisation by being in the cloud. Look for applications where resources are used intensely for a short period each month and then left idle for the rest of the time, or applications where a moderate level of resources is used continuously but which experience periods of very high activity.

Such applications are ideal cloud candidates. This is so, because the cloud can scale up and down resources on demand. The cloud is built for flexible access to resources that can be allocated to other applications, or even other customers, when idle.

The key take-away is that one should do a cost benefit analysis of all activities undertaken and gauge the advantages/disadvantages of shifting to the cloud. A proper evaluation would also ensure that you minimise disruptions and costs associated thereto. Who knows, the sum of all parts may be greater than the whole.

Myth 4:

IT role changes: Do I still need an IT administrator?

The role of the Exchange Administrator does not become obsolete due to the cloud. There are still many tasks that remain on-premise. You still have to manage your users and their mailboxes. Industry-specific data retention compliance, as well as implementing custom workflows, is still your responsibility. While some tasks may no longer reside on-premise, managed cloud services free up your time to engage in more strategic roles, providing you with new opportunities.

Apart from patching those servers and physically maintaining them, all other aspects of managing applications remain in the IT administrator’s hands. Monitoring, updating, integration with services such as Active Directory, security and network monitoring — these tasks are still required within organisations utilising cloud services.

The key take-away is that the all-powerful IT administrator’s role is impacted, but not diminished. The IT administrator’s role will evolve as the availability of compute cycles and networked storage increases — that is a given, just as the IT role has evolved in the past. The question IT administrators must ask themselves is, ‘Am I prepared to play a more strategic role in my organisation?’

Myth 5:

Getting started: All you need is your credit card to start cloud computing:

You can begin using cloud computing services with just a credit card. This is a good way to get experience with this new frontier of services; in fact, some of the basic services may be available free of charge. Most cloud services provide an environment designed for getting started and developing applications.

It is important that one gets used to the concept and gains comfort with it. After this, one may evaluate the advantages/disadvantages of moving to the cloud.

The key take-away is take small, measured steps. Learn from experience before betting it all.

In summary:

There are pros and there are cons; there are also hyped stories of success and spiced-up stories of failure. Readers would be well advised to do their own research and allay their fears of this emerging service. While the service provider may promise you the moon, one should pare one’s expectations and make investments only upon realising measured benefits. In short, look before you leap!

Cyber warfare — the next level

About this write-up

This write-up is about a new type of worm/malware, which was in the news recently. The worm called Flamer attracted a lot of hype and media attention given the speculation regarding its likely impact. This write-up is an attempt to cull out some key takeaways for benefit of the readers.

Background

Cyberspace is no longer a benign place to surf. Viruses have become increasingly nasty and complex over the years. But while worms were traditionally used by hackers and cybercriminals either to display their prowess or to steal information and money, it appears now that even nation-states are backing such crimes to target countries – a trend popularly known as cyber espionage and cyber warfare.

Cyber warfare – the next level – Flamer worm

Circa 2010, news reports started appearing about a new type of worm, i.e., Stuxnet1. What was different about this worm was that it was the first of its kind, i.e., the level of complexity, its apparent motive and the intended victims were not the ‘usual’ businesses or gullible individuals. On the contrary, experts believed that this was a ‘first’ – a worm written by a sovereign nation with the sole purpose of disrupting infrastructure facilities in another territory. It was also a ‘first’ because the worm was no longer attacking the zeros and ones (computer code); this time it was attacking the devices that were controlled by these zeros and ones – with a view to disrupting their functionality. There was the nagging feeling . . . . . . the type you get when somebody really bad/capable of doing nasty things says . . . . I’ll be back (like Arnold Schwarzenegger in Terminator). It was (painfully) obvious that Stuxnet wasn’t the last word on the topic and things were likely to heat up . . . . very soon . . . . Coming back to the present day, that nagging feeling has become a reality – Stuxnet appeared in 2010, Duqu surfaced in 2011. Sometime around May 20122, security experts started issuing warnings about the ‘Flamer’ worm, aka W32.Flamer or sKyWIper.

Threat assessment

A senior analyst at a leading security firm, sharing his view on the subject, revealed that this is the most sophisticated threat he has ever seen. The same security firm had undertaken a detailed analysis of the ground-breaking Stuxnet virus, which ‘purportedly’ targeted Iran’s nuclear enrichment facilities two years ago, sending some of their centrifuges spinning out of control. The preliminary results shared by the senior analyst suggested that Flamer appeared to be even more complex than Stuxnet, and that it was an incredibly clever, comprehensive ‘spying programme’.

Grapevine reports suggest, “Flamer is a backdoor worm that goes looking for very specific information. It scrapes a mass of information from any infected machine and then sends it, without the user having any idea what is going on. The amount of information it can send is huge”.

Components identified3

A number of components of the threat have been retrieved and are currently being analysed. Several of the components have been written in such a way that they do not appear overtly malicious. Some of the components identified as malicious are:
• advnetcfg.ocx (0.6MB) (backdoor component)
• ccalc32.sys (RC4-encrypted config file)
• mssecmgr.sys (6MB) (main component: compression, Lua interpreter, SSH and SQL libraries)
• msglu32.ocx (1.6MB) (steals data from images and documents)
• boot32drv.sys (~1KB) (config file)
• nteps32.ocx (0.8MB) (performs screen capture)

This time it is different

The one thing that everyone is sure about is that Stuxnet, Duqu and Flamer are definitely in another class than your typical spyware or fake antivirus threat. Experts universally agree that this complex software required a coding team and could not have been achieved by a lone-wolf coder. The complexity of the task has led many to presume that only a nation-state would have the resources, just as is being speculated in the case of Stuxnet. It is interesting to note that unlike Duqu, Stuxnet and Flamer have the ability to infect systems via USB key, thus allowing them entry into facilities that are isolated from the Internet. They also use the same printer-driver vulnerability to spread within the local network. While all three worms are similar in the sense that all three are seriously modular (i.e., built in a way that lets their command and control servers add or update functionality at any time), Flamer is definitely a step up.

  •  Here is why: According to Kaspersky researchers, a Stuxnet infestation takes just 500KB of space; as against this, Flamer is an out-and-out giant at 20MB. Part of Flamer’s size involves the use of many third-party code libraries, prefab modules that handle tasks like managing databases and interpreting script code. Neither Stuxnet nor Duqu relies on third-party modules.

  • Given its size, Flamer is smart enough to mask its download impact. It is downloaded in multiple sessions. This is done to avoid giving itself away. In this respect, it is far more intelligent than its predecessors.

  •  Stuxnet and Duqu used stolen digital signatures to fool antivirus software. Unlike these, Flamer doesn’t use a digital signature. Instead, Flamer uses some unique techniques for self-protection, chief among them the ability to recognise over 100 antivirus installations and modify its behaviour accordingly. It uses five different encryption methods, three different compression techniques, at least five different file formats (and some proprietary formats too) and special code injection techniques.

  • Although Flamer is not concealed by a rootkit, it uses a series of tricks to stay hidden and stealthily export stolen data. One of its most amazing capabilities is the creation of a file on the USB stick simply named ‘.’ (dot). Even if the short name for this file is HUB001.DAT, the long name is set to ‘.’, which is interpreted by Windows as the current directory. This makes the OS unable to read the contents of the file or even display it. A closer look inside the file reveals that it is encrypted with a substitution algorithm.
  •  Flamer is definitely complex. In one of the earlier reports on this threat, a security expert noted that it has at least 20 modules, most of which are still being investigated. Another expert remarked that one of its smaller modules alone is over 70,000 lines of decompiled C code and contains over 170 encrypted strings. As for what it does, you might better ask what it doesn’t do. Just about any kind of espionage you can imagine is handled by one of Flamer’s modules.


  • Flamer has very advanced functionality to steal information and to propagate. Using this toolkit, multiple exploits and propagation methods can be freely configured by the attackers. Information gathering from a large network of infected computers was never crafted as carefully as has been done in Flamer.

  •  Stuxnet relied on an unprecedented four zero-day attacks to penetrate systems and Duqu managed with just one zero-day attack. Flamer didn’t use any zero-day attacks.

  •  Stuxnet and Duqu infestations automatically self-destructed after a set time; Flamer can self-destruct, but only upon receiving the auto-destruct code from its masters.
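To give a feel for what a ‘substitution algorithm’ (mentioned above in connection with the ‘.’ file) involves, here is a toy sketch in Python. This is purely illustrative and is not Flamer’s actual scheme, which has not been published in full: every byte value is mapped to another through a fixed table, and decryption applies the inverse table.

```python
import random

# Build a random byte-substitution table and its inverse (toy example).
rng = random.Random(42)          # fixed seed, so the table is reproducible
table = list(range(256))
rng.shuffle(table)
inverse = [0] * 256
for plain, cipher in enumerate(table):
    inverse[cipher] = plain

def substitute(data: bytes, tbl) -> bytes:
    """Map each byte of data through the given 256-entry table."""
    return bytes(tbl[b] for b in data)

secret = substitute(b"HUB001.DAT", table)      # 'encrypt'
assert substitute(secret, inverse) == b"HUB001.DAT"  # applying the inverse recovers it
```

Such ciphers are trivially breakable by frequency analysis, which is exactly why their appearance inside a file named ‘.’ was a concealment trick rather than serious cryptography.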

It’s worth noting that Flamer doesn’t necessarily do any of the things described above, not even replicate to other systems, unless it’s told to do so by its Command and Control servers. This combined with the fact that it uses many standard commercial modules has helped it get past behaviour and reputation-based detection systems (i.e., our commonly used antivirus systems).

It’s a live program that communicates back to its master. It asks, where should I go? What should I do now?

Experts say that Flamer is most likely capable of using all of the computer’s functionalities for its goals. It covers all major possibilities for gathering intelligence, including the keyboard, screen, microphone, storage devices, network, WiFi, Bluetooth, USB and system processes.

To put it simply, once a system is infected, Flamer begins a complex set of operations, including sniffing the network traffic, taking screenshots, recording audio conversations, intercepting the keyboard, and so on and so forth.

Sounds just like a cold war (fiction) scenario — where highly trained, deep-cover ‘sleeper’ agents were inserted deep inside enemy territory to attack the enemy from within. Takes me back to some of my favourite movies . . . . Salt, Killers, The Impossible Spy . . . .

Readers who are interested in more technical information may also look up the following:

  •  http://www.symantec.com/security_response/writeup.jsp?docid=2012-053007-0702-99&om_rssid=sr-mixed30days

  •  http://blogs.mcafee.com/mcafee-labs/jumping-in-to-the-flames-of-skywiper

  •  http://www.mcafee.com/threat-intelligence/malware/default.aspx?id=1195098

  •  http://www.f-secure.com/weblog/archives/00002371.html

  •  http://www.kaspersky.com/about/news/virus/2012/Kaspersky_Lab_and_ITU_Research_Reveals_New_Advanced_Cyber_Threat

  •  http://www.mcafee.com/us/about/skywiper.aspx

  •  http://www.crysys.hu/skywiper/skywiper.pdf

It would be a cliché to say that this is not the last we have heard about this worm, or that cyber warfare is now gaining momentum; therefore, expect to read and hear more on this topic.

 1.    Read Cyber warfare — the next level, BCAJ October 2010

 2.    Unconfirmed reports suggest Flamer was first reported as early as 2007

 3.    Source: www.symantec.com

High-Frequency Trading

High-Frequency Trading (‘HFT’) has been around for many years now. In spite of this, very little is known about HFT. Ever since its beginnings, people in general have either sung its praises or spoken of its dark side. The purpose of this article, however, is not to dwell on the merits or demerits of HFT. Instead, this article depicts how technology is used in this trade and the basic mechanics of HFT. The technical content has been kept to a bare minimum and logical/practical aspects have been highlighted wherever possible.

Background

Once upon a time, trading in stocks, securities, commodities, etc. was done on the ‘exchange floor’. Back then, ‘trading’ was a fairly straightforward affair. Buyers and sellers gathered on exchange floors and haggled with each other until they struck a deal. Those were the heady days of power, pressure and sentiment. However, trading on the exchange floor had its own limitations and the trading practices were plagued with malpractice.

In case you have never had the chance to see or experience how trading took place in the olden days, check out these movies: in English, Trading Places and Wall Street; in Hindi, Guru.

By the mid-nineties, computers and technology had started gaining prominence. The ability of a computerised system to flawlessly execute transactions, match buy and sell orders, etc., was growing exponentially. Then, in 1998, the Securities and Exchange Commission authorised electronic exchanges to compete with marketplaces like the New York Stock Exchange. The basic intent was to open markets to anyone with a desktop computer and a fresh idea. This objective was largely achieved.

Apparently (as per data published by the NYSE and other public sources), between 2005 and 2009 the trading volume on the NYSE grew about 164%. News reports have credited HFT with a large part of this meteoric rise. As a matter of fact, there are some who say that in the United States (US), while high-frequency trading firms represent 2% of the approximately 20,000 firms operating, they account for 73% of all equity order volume. Currently, it is estimated that HFT trades account for 56% of all equity order volumes in the US, 38% of trades in Europe and 5-10% of trades executed in Asia.

Making money out of thin air

HFT became most popular when exchanges began to offer incentives for companies to add liquidity to the market. For instance, some exchanges have a group of liquidity providers called supplemental liquidity providers (SLPs), which attempt to add competition and liquidity for existing quotes on the exchange. As an incentive to the firm, the exchange pays a fee1 or rebate for providing the said liquidity. Rumour has it that the SLP programme was introduced following the collapse of Lehman Brothers in 2008, when liquidity was a major concern for investors.

High-frequency traders also benefit from competition among the various exchanges, which pay small fees that are often collected by the biggest and most active traders — typically a quarter of a cent per share to whoever arrives first. Those small payments, spread over millions of shares, help high-speed investors profit simply by trading enormous numbers of shares, even if they buy or sell at a modest loss.
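The arithmetic behind ‘profit at a modest loss’ is worth spelling out. The figures below are purely illustrative (the quarter-of-a-cent rebate cited above, an assumed trading loss of a tenth of a cent per share, and an assumed daily volume), not actual exchange rates:

```python
REBATE_PER_SHARE = 0.0025        # a quarter of a cent, as cited above
TRADING_LOSS_PER_SHARE = 0.0010  # assumed modest loss on each share traded

shares = 10_000_000  # assumed daily volume of ten million shares
net = shares * (REBATE_PER_SHARE - TRADING_LOSS_PER_SHARE)
print(f"Net profit: ${net:,.2f}")  # $15,000.00, despite losing on every trade
```

The rebate more than covers the loss, which is why sheer volume, not price, drives the profit.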

HFT made simple

HFT is a program trading platform that uses powerful computers to transact a large number of orders at very fast speeds. HFT uses complex algorithms2 to analyse multiple markets and execute orders based on market conditions. Typically, the traders with the fastest execution speeds will be more profitable than traders with slower execution speeds.

Powerful algorithms — ‘algos’, in industry parlance — execute millions of orders a second and scan dozens of public and private marketplaces simultaneously. They can spot trends before other investors can blink, changing orders and strategies within milliseconds.

Basic mechanics

The mechanics of such systems coupled with complex algorithms are not standardised. Conceptually, the design may be broken down as follows:

  •     The data stream unit, i.e., the part of the system that receives data (e.g., quotes, news) from external sources;

  •     The decision or strategy unit

  •     The execution unit.

These systems are very intelligent and make use of social networks and scanning or screening technologies to read users’ posts and extract human sentiment, which may influence the trading strategies.

Characteristics of a HFT system

HFT can be characterised as under:

  •     It uses computerised algorithms to analyse incoming market data and implement trading strategies;

  •     HFT trading strategies are for investment horizons of less than one day. The primary game plan is to unwind all positions before the end of each trading day. An investment position is held only for very brief periods of time i.e., from seconds to hours. The system rapidly trades into and out of those positions, sometimes thousands or tens of thousands of times a day;

  •     At the end of a trading day there is no net investment position. Since they must finish the day flat, HFTs exhibit balanced bi-directional (i.e., ‘two-way’) flow. It is argued that due to this feature HFTs can’t accumulate large positions.

  •     HFTs can’t deploy large amounts of capital; in fact, HFTs have little need for outside capital or leverage, and tend to be proprietary traders. In theory, HFTs can’t ‘blow up’ (they don’t use much leverage, and don’t have much capital, so they can’t lose much capital!);

  •     Generally employed by proprietary firms or on proprietary trading desks in larger, diversified firms;

  •     It is very sensitive to the processing speed of markets and of the trader’s own access to the market;

  •     Positions are taken in equities, options, futures, ETFs, currencies, and other financial instruments that can be traded electronically;

  •     High-frequency traders compete on the basis of speed with other high-frequency traders, not (supposedly) with long-term investors (who typically look for opportunities over a period of weeks, months, or years), and compete for very small, consistent profits;

  •     HFT is a very low-margin (low-risk, low-reward) activity;

  •     Theoretically speaking, HFT returns follow a Gaussian (Normal) distribution. The logic is simple, i.e., large expected returns are rare and tiny expected returns are abundant;

  •     For the HFTs, opportunities are short-lived because they are very small and they are heavily competed for;

  •     The economics of HFT require the identification of large quantities of trading signals, which is highly technology-intensive. Success or failure is determined by the HFT’s speed, i.e., speed in capturing opportunities before they are accessed by competitors.
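The ‘flat at the close’, low-margin/high-volume profile described above can be illustrated with a toy simulation (a hypothetical sketch, not any real trading system): thousands of tiny round-trip trades, each with a very small expected edge, and every position unwound immediately so the day ends with zero net inventory.

```python
import random

def simulate_hft_day(n_trades=10_000, seed=42):
    """Toy model of an HFT day: many tiny round-trip trades, each with
    a very small expected edge, ending the day flat (no net position)."""
    rng = random.Random(seed)
    position = 0   # net inventory in shares
    pnl = 0.0      # running profit and loss
    for _ in range(n_trades):
        qty = rng.choice([100, -100])        # buy or sell a small lot
        position += qty
        # tiny per-share edge: fractions of a cent, sometimes negative
        pnl += abs(qty) * rng.gauss(0.001, 0.01)
        # unwind almost immediately: positions last seconds, not days
        position -= qty
    assert position == 0                     # finish the day flat
    return pnl

print(round(simulate_hft_day(), 2))
```

The per-trade profit is minuscule; only the sheer number of trades makes the day worthwhile, which is why speed and volume matter so much.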

Standard HFT strategies

Most high-frequency trading strategies fall within one of the following trading strategies:

  •     Market making: involves placing a limit order to sell (or offer) or a buy limit order (or bid) in order to earn the bid-ask spread. By doing so, market makers provide the counterparty to incoming market orders;

  •     Ticker tape trading: much information happens to be unwittingly embedded in market data, such as quotes and volumes. By observing a flow of quotes, high-frequency trading machines are capable of extracting information that has not yet crossed the news screens;

  •     Event arbitrage: certain recurring events generate predictable short-term response in a selected set of securities, HFTs take advantage of such predictability to generate short-term profits;

  •     High-frequency statistical arbitrage: this strategy requires the HFT to exploit predictable temporary deviations from stable statistical relationships among securities.
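The market-making strategy in the list above can be sketched in a few lines (a deliberately simplified, hypothetical model): quote a bid and an ask around a mid price, fill incoming market orders at those quotes, and earn the spread whenever the flow is balanced.

```python
def market_maker(mid_price, spread, incoming_orders):
    """Toy market-making book: quote bid/ask around the mid and earn
    the spread on matched flow. `incoming_orders` is a list of
    ('buy' | 'sell', quantity) tuples from other market participants."""
    bid = mid_price - spread / 2    # price at which we buy
    ask = mid_price + spread / 2    # price at which we sell
    inventory, cash = 0, 0.0
    for side, qty in incoming_orders:
        if side == 'buy':           # customer buys: we sell at the ask
            inventory -= qty
            cash += qty * ask
        else:                       # customer sells: we buy at the bid
            inventory += qty
            cash -= qty * bid
    # with balanced two-way flow, inventory nets to zero and the
    # cash left over is exactly the captured spread
    return inventory, cash

inv, pnl = market_maker(100.0, 0.02, [('buy', 50), ('sell', 50)])
print(inv, round(pnl, 2))  # → 0 1.0
```

With 50 shares bought and 50 sold around a 2-paise spread, the book ends flat and keeps 50 × 0.02 = 1.0 in profit, which is the whole point of the strategy.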

HFT: the dark side

High-frequency traders often confound other investors by issuing and then cancelling orders almost simultaneously. Loopholes in market rules give high-speed investors an early glance at how others are trading. And their computers can essentially bully slower investors into giving up profits — and then disappear before anyone even knows they were there.

HFT came into the spotlight about two years ago when a very large Wall Street firm sued one of its former employees for stealing code used in one of the programs that executes this type of trade. When the former employee (a programmer) was accused of stealing secret computer code/software that Government prosecutors said could ‘manipulate markets in unfair ways’, it only added to the mystery, because the Wall Street firm acknowledges that it profits from high-frequency trading but disputes that it has an unfair advantage.

In May 2010, a ‘flash crash’ took place in the Dow, in which several blue-chip companies lost a lot of their value in a matter of minutes, and the New York Times reported that shares of big companies like P&G and Accenture traded at ridiculous prices like a penny or $100,000. The prices were later restored to more usual levels.

Even in India, the BSE cancelled all the futures trades executed in one of its trading sessions last year, and at least one initial report blamed an algo trader from Delhi for causing havoc with their trades.

Despite the fact that HFT has been around for more than a decade, even today very little is known about HFT and algorithmic trading. Only recently have regulators like the SEC and SEBI started asking questions. In fact, interested readers may look up the recent guidelines issued by SEBI on this issue. SEBI’s endeavour is to contain the possibility of systemic risk caused by the use of sophisticated automated software by brokers.

Several questions remain: how do these programs work, what are the triggers, is there a risk, and do these programs give the user an undue/unfair advantage? Only time will tell.

Disclaimer:

This article is only intended to create awareness about HFT. The contents of this article are based on various stories, articles, research papers, etc. currently available in the public domain. The purpose of this article is neither to promote, nor malign any person or a company mentioned in the article.

Microsoft Office 2013

About this write-up
MS Office is a popular application software and enjoys wide usage across the world. Recently Microsoft released the Customer Preview of the latest version of its Office suite i.e., Microsoft Office 2013 (a.k.a Office 15). This write-up briefly discusses some of the new features proposed to be introduced in the new software, product enhancements to existing features, and some pros & cons associated therewith.

In my last write-up I had mentioned that developments and product announcements/launches were happening in such quick succession, that hardly a day passes by and a new product is launched. As a consequence, products are becoming out of fashion (in relative terms) almost immediately after launch.

When I was penning my previous write-up, I chose to write about the Flamer worm instead of writing about Samsung Galaxy S-III, iOS 6 and Microsoft Surface) . . . . don’t ask me why. Anyhow, I had ended the write up with the note that the next write-up would be about Samsung Galaxy S-III. In all honesty, I was all set to keep this commitment and suddenly out of the blue I read about Microsoft’s latest. All of a sudden it felt like Galaxy S-III had already become ‘old news’ and I had to write about the latest offering (announcement for now) from Microsoft. And so . . . . here we are . . . .

Background

Microsoft Office 2013 (a.k.a. Office 15) is a productivity suite from Microsoft and is likely to succeed the hugely popular Microsoft Office 2010. A developmental version (build 2703.1000) was leaked in May 2011. Subsequently, in January 2012, Microsoft released a technical preview of Office 15 (build 3612.1010). Almost six months later, on 16 July 2012 (to be precise), Microsoft unveiled the Customer Preview.

In this write-up, I have tried to highlight some of the new features proposed to be introduced in the new software, product enhancements to existing features, and some pros & cons associated therewith.

What’s new in Office 15

 While there are several features that one can describe, here are a few features that I found exciting:

  • Cloud integration
  • Responds to touch, stylus and the good ol’ keyboard
  • The new ‘Metro’ look
  • Edit PDFs in Word 2013
  • Support for Open Document Format (‘ODF’) 1.2
  • Sharing and embedding web elements like YouTube videos
  • Social media integration — Skype, Flickr
  • Enhancements in Excel, Word, Outlook, OneNote.

Some of the things that might not excite a few people:

  • Will have to upgrade from Windows XP/Vista
  • Get used to SkyDrive cloud storage.

Cloud integration

Cloud integration is now becoming a de facto ‘must-have’ feature. Cloud storage has been around for quite some time (the X-drive types). Without getting into ‘who started it all’, Google’s Chrome OS was a serious attempt to move towards cloud integration. If you recall, the Chrome OS was touted as one of the slimmest OSes because it required very little time to boot, and Google had famously said there was no need to provide any apps within the OS because everything was on the internet: most people only boot their PCs to log on to the net, hence all the apps would be on the net. Last year, when Apple unveiled its latest offering, it also announced a new service, iCloud (5 GB storage). Gone are the days when you needed to synchronise your PCs at different locations or carry data on a portable drive or disc. With Office 15, Microsoft too has joined the gang. SkyDrive is the default storage location for all your files (effectively, SkyDrive is expected to replace the local C drive). Subscribers will be given 20 GB of storage space.

With this version, Microsoft is moving to a subscription-based model wherein your Office files are tied to your Microsoft ID. Once you sign up, you can download the various desktop apps to a certain number of devices and, as with Windows 8, your settings, SkyDrive files and even the place where you left off in a document will follow you from device to device. Office 365, which is currently sold to businesses, will be available to home users as well.

In addition to receiving future Office upgrades automatically, subscribers will get additional SkyDrive storage, multiple installs for several users, and added perks such as international calls via Skype. You’ll also be able to stream Office apps to an Internet-connected Windows PC.

Responds to touch, stylus also

The preview page says “Office 15 will take you beyond the mouse and keyboard — to embrace touch and pen input” (one can hope for a much better experience while using OneNote). While multi-touch laptops aren’t — and probably won’t be — a mainstream choice for business and home users anytime soon, touch is an essential component of smartphones and tablets. The pen may be making a comeback too, judging by the popularity of Samsung’s stylus-equipped Galaxy Note. Office 2013 will allow you to swipe a finger across the screen to turn a page, pinch and zoom to read documents, and write with a finger or stylus, just like you do on your smartphone or tablet. Additionally, when you write an email by hand, Office 2013 will automatically convert it to text. The user interface has been modified (especially the Ribbon feature: it has been flattened, or as Microsoft likes to call it, ‘Metrified’). While this may seem a bit odd when you see it on a desktop, you may appreciate it more when you try using it on a tablet PC or on your smartphone.

The new ‘Metro’ look

Microsoft loves the Metro user interface, which was first introduced in Windows Phone 7 around two years ago. Since then, Metro has become the user interface of the future for Microsoft, and the company is putting it in all its products. Office 2013 too has been given a Metro makeover. It is a slick interface, with clean lines and lots of empty space, and looks modern.

For the uninitiated

Metro is an internal code name for a typography-based design language created by Microsoft, originally meant for use in Windows Phone 7. Early uses of the Metro principles began as early as Microsoft Encarta 95 and MSN 2.0. Later, these principles evolved into Windows Media Center and the Zune. Now they are included in Windows Phone, Microsoft’s website, the Xbox 360 dashboard update, and Windows 8. A key design principle of Metro is a better focus on the content of applications, relying more on typography and less on graphics (‘content before chrome’). WinJS is a JavaScript library by Microsoft for developing Metro applications with HTML.

There are two aspects to the design changes introduced in Office 2013: visual changes and usability changes. Microsoft thinks there is no need for any faux chrome or Aero fluff around windows. Hence, the interface has been ‘Metrified’ (that’s how Microsoft likes to say it). The icons have been flattened and things have been cleaned up (i.e., the heavy boundaries, bevelled edges, shadows, etc. are all gone).

In fact, icons are likely to be a thing of the past; under Metro there will be hardly any need for them. While some argue that icons were simple (graphic, easy to remember) indicators for tools like copy and paste, and that they kinda spruced things up, Microsoft argues that when you have as many as 4,000 such icons, they eat away most of your display area.

Microsoft justifies the Metrification by saying that Office 15 is likely to be used and seen on screens of different shapes and sizes. Consider the screen of a typical smartphone: would you rather see the screen or the numerous icons? Duh!!! It’s about getting the content front and centre and trying to get the application chrome out of the way — there when you need it, but out of the way when you don’t. On tablets and smartphones, you want to put the application stuff to one side.

Microsoft thinks that once you get the hang of it, you will appreciate the thought process.

Edit PDF documents in Word 2013

Until now, you could only ‘save’ Office files in PDF format. To edit these or other PDF files, you either had to edit the original Office file and then save it as a PDF again, or buy third-party software/utilities. Going forward, you will be able to open PDF files, edit them in MS Word 2013, and then save them as Word files or as PDFs.

Word 2013 will maintain the formatting of the PDF, such as headers, columns and footnotes, and elements such as tables and graphics, and permit you to edit them as though they were created in Office 2013 itself.

User feedback suggests that Office 2013 handles simpler PDF files with ease, but is not so graceful with complex ones that have many images and elements.

Will support Open Document Format (‘ODF’) 1.2

Microsoft fought ODF as it became an open international standard (ISO/IEC 26300), by creating its own standard, OOXML (ISO/IEC 29500), and pushing it through standards organisations. But Microsoft has now apparently accepted that ODF has widespread support among other vendors, governments and organisations.

Microsoft already supports ODF 1.1 in Office 2007 SP1, Office 365, SharePoint and SkyDrive WebApps. Now Office 2013 will support ODF 1.2.

ODF 1.2 has already been widely adopted and is supported by several applications, such as Gnumeric, Google Docs, Zoho Office and AbiWord.

Sharing, embedding web elements like YouTube videos & social media integration — Skype, Flickr

Office 2013 uses SkyDrive to enable better sharing of documents. You can invite people to work on the document, or use PowerPoint to give a presentation on the web. Word files can also be published as blogs on several popular blogging services directly from Office 2013.

YouTube videos can now be embedded into documents directly; users don’t have to save these clips to the local computer. Office 2013 also includes Flickr integration, which allows users to search for photographs on the popular photo-sharing website and embed pictures using Office 2013.

Microsoft acquired Skype last year, and Office 2013 will be the first suite to incorporate the popular VoIP service. You can integrate Skype contacts with Microsoft’s enterprise-oriented Lync communications platform for calling and instant messaging. Office subscribers get 60 minutes of Skype international calls each month.

User feedback suggests that there’s room for improvement, though.

Big Data – II

About this Article

This article is part 2 of the series on Big Data. It briefly deals with why Big Data is gaining so much importance and what the recent trends in Big Data collection and analysis are. The write-up also discusses some of the technologies being used for Big Data analysis.

The previous write-up briefly touched upon what Big Data is and the vital role it plays. This write-up delves a little further into some of the trends and developments in this arena.

Background:
Big Data, as discussed earlier, is all about collecting, storing and analysing data, and using the results for betterment (one sincerely hopes so). It is typically characterised by features such as volume, velocity, variety and veracity. While Big Data is not entirely a recent development, the manner in which data is gathered, the sources of information, and the techniques for storage and analysis have evolved significantly in recent times.

Big Data is for Everyone:
Generally speaking, most people believe that Big Data is for large corporations and businesses or for the Government. But the truth is, whether you are a five-person shop or part of the Fortune 500, you can have Big Data and it can help you grow and become profitable. Today, anyone who wants to remain competitive has to analyse both internal and external data, as quickly and cost-effectively as possible. This rule applies equally to all types of organisations, big or small, giants or dwarfs.

Right now, you may be asking: how will Big Data help me find opportunities by analysing new sources of data? Here is one small example:

As the world becomes more instrumented, with RFID tags, sensors and other sources, we are creating more and more data. When paired with external data, like that generated by social media sites, there is an incredible opportunity that remains largely untapped and unanalysed. This is where Big Data analysis comes into the picture. Every day, companies of all sizes “cut through the noise” created by so much data to find valuable insights.

Big Data analysis can be applied not just to businesses and commercial organisations, but to the social sector too. Using the same techniques and tools (i.e., those used for developing marketing and risk-management tools), Big Data analysis has the potential to revolutionise the functioning of the social sector. For instance, imagine the advantages of using Big Data analysis in:

  •  the public sector;

  •  the healthcare sector; or

  •  (to put it more generally) those sectors where an ethos of treating all citizens alike is expected.

The advantages would extend beyond commerce to the realm of mass social betterment.

How Big Data is Used:
Big Data allows organisations to create highly specific segmentations and to tailor products and services precisely to meet customers’ needs.

Consumer goods and service companies that have used segmentation for many years are beginning to deploy ever more sophisticated Big Data techniques, such as the real-time micro-segmentation of customers to target promotions and advertising. As they create and store more transactional data in digital form, organisations can collect more accurate and detailed performance data, in real or near-real time, on everything from product inventories to personnel sick days. Information technology is used to instrument processes and then set up controlled experiments.
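A tiny sketch of the segmentation idea (field names and thresholds here are entirely hypothetical, chosen only for illustration): bucket customers by spend and recency, so each segment can be targeted with a tailored promotion.

```python
def segment(customer):
    """Assign a customer record (hypothetical fields: 'spend' over the
    period, 'days_since_visit') to a coarse marketing segment."""
    if customer["spend"] > 10_000 and customer["days_since_visit"] <= 30:
        return "high-value, active"
    if customer["spend"] > 10_000:
        return "high-value, lapsing"
    if customer["days_since_visit"] <= 30:
        return "low-value, active"
    return "low-value, lapsing"

customers = [
    {"id": 1, "spend": 15_000, "days_since_visit": 7},
    {"id": 2, "spend": 2_000, "days_since_visit": 90},
]
print([segment(c) for c in customers])
# → ['high-value, active', 'low-value, lapsing']
```

Real micro-segmentation runs rules like these over millions of records in near-real time; the logic is the same, only the scale differs.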

The data generated therefrom is used to understand the root causes of the results, enabling leaders to make decisions and implement change.

Big Data technologies:
Some of the key Big Data technologies which are in play are described below:

  •  Cassandra: an open source (free) database management system, designed to handle huge amounts of data on a distributed system. It was originally developed at Facebook and is now managed as a project of the Apache Software Foundation.

  •  Dynamo: a proprietary distributed storage system developed by Amazon.

  •  Hadoop: an open source software framework for processing huge datasets on certain kinds of problems on a distributed system. Its development was inspired by Google’s MapReduce and the Google File System. It was originally developed at Yahoo! and is now managed as a project of the Apache Software Foundation.

  •  R: an open source programming language and software environment for statistical computing and graphics. The R language has become a de facto standard among statisticians and is widely used for statistical software development and data analysis. R is part of the GNU Project, a collaboration that supports open source projects.

  •  HBase: an open source (free), distributed, non-relational database modelled on Google’s BigTable. It was originally developed by Powerset and is now managed as a project of the Apache Software Foundation, as part of Hadoop.

  •  MapReduce: a software framework introduced by Google for processing huge datasets on certain kinds of problems on a distributed system. This too has been implemented in Hadoop.

  •  Stream processing: also known as event stream processing, this refers to technologies designed to process large real-time streams of event data. Stream processing enables applications such as algorithmic trading in financial services, RFID event-processing applications, fraud detection, process monitoring, and location-based services in telecommunications.

  •  Visualisation: this refers to technologies used for creating images, diagrams or animations to communicate a message, often used to synthesise the results of Big Data analyses. Some instances of visualisation are tag clouds, clustergrams, history flows, etc.
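The MapReduce pattern mentioned above is easiest to see in the classic word-count example. Below is a minimal single-machine sketch of the three phases (map, shuffle, reduce); frameworks such as Hadoop run the same phases distributed across a cluster, which this toy code does not attempt.

```python
from collections import defaultdict

def map_phase(document):
    # map: emit a (word, 1) pair for every word in the input
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # shuffle: group all values belonging to the same key
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # reduce: collapse each key's list of values into a single count
    return {key: sum(values) for key, values in grouped.items()}

docs = ["big data big insights", "big opportunity"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])  # → 3
```

Each document can be mapped independently and each key reduced independently, which is precisely what makes the pattern suitable for huge datasets on distributed systems.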

Myths surrounding Big Data:
While there are many myths surrounding Big Data, for the purpose of this write-up I have briefly summarised a few myths commonly associated with it. These are:

Big Data is only about massive volumes of data:

As discussed in part 1, volume is only one of the factors. Generally, the industry considers petabytes of data as a starting point. However, it is only a starting point; there are other aspects such as velocity, variety and veracity to deal with.

Big Data means unstructured data:

While variety is an important characteristic, it should be understood in terms of the format in which the data is gathered and stored. Many people mistakenly believe that the data would be in an unstructured format. As a matter of fact, the term “unstructured” is misleading to a certain extent, because it doesn’t take into account the many varying and subtle structures typically associated with Big Data types. Candidly, many industry insiders admit that Big Data may well have different data types within the same set that do not share the same structure. Some suggest that a better way to describe Big Data would be to term it “multi-structured”.

Big Data is a silver bullet type solution:

This is an avoidable pitfall. Many businesses tend to believe that Big Data is a silver bullet for their growth strategy. The applications available only offer one of the means to analyse data; applying the learnings from the analysis is altogether a different thing. What needs to be understood is that Big Data is only a means to an end, not the end itself.

What to expect in future:

  •     Big Data will be an important driver of business activity in the future. Almost all businesses will leverage insights from Big Data based research to hone their strategy. Be it innovation, competition or value addition, Big Data’s contribution will be significant.

  •     The impact of Big Data will span across sectors. Among these, the health sciences and natural sciences are likely to have the most positive impact on the larger society.

  •     One can expect that the sources of data, and the volume of data itself, will grow exponentially. Consequently, the data integration process will become more efficient.

  •     There will be a demand for talented personnel. Notably, the demand will not be restricted to personnel possessing the requisite skills for collecting and analysing Big Data; the need will be for personnel who know how to use the results of Big Data analysis in effective decision making.

  •     Decision making as we know it (and practise it) today is likely to undergo a drastic change. Sophisticated analytics can substantially improve decision making, minimise risks, and unearth valuable insights that would otherwise remain hidden.

  •     We are likely to see a sea change in the regulatory environment, mainly related to privacy, intellectual property rights and public liability.

Well, this concludes part 2 of the write-up on Big Data. In my next write-up I intend to deal with “the (ab)use of social media” and cover some (disturbing) trends that have caught the attention of many. It’s still a thought, but the idea is fresh.

Disclaimer: The information/factual data provided in the above write up is based on several news reports, articles, etc., available in the public domain. The purpose of this write up is not to promote or malign any person or company or entity, the purpose is merely to create an awareness and share the knowledge that is already available in the public domain.

Crowdsourcing

About this article:
This write-up is (in a manner of speaking) a continuation of the previous write-up on mass collaboration. The basic idea remains the same: there is a large problem, capable of being broken into several small manageable parts. The task, though simple to humans, is difficult for computers to achieve (as yet). This idea is applied differently to achieve a variety of objectives. Some are commercial and then there are others which contribute to the growth of society as a whole.

Background:
The term ‘crowdsourcing’, as you may have already guessed, is a derivative of the words ‘crowd’ and ‘sourcing’. While the phrase was first coined by Jeff Howe in a June 2006 Wired magazine article, you may be surprised to know that the concept was being commonly applied for several years before that. A few examples which have become huge:

  • Wikipedia
  • CAPTCHA and reCAPTCHA

Some lesser known examples:

  • Brooke Bond/Lipton runs a slogan contest; the winner gets a cash reward (and Brooke Bond gets 1,000+ new catchy slogans for future marketing — virtually for free);

  • An ad agency organises a photography contest. Contestants use their own cameras and film. They are given themes/concepts and come up with innovative ideas/snaps. The ad agency spends on promoting the event and some refreshments for the contestants. After the contest, the ad agency retains all the photos (thousands of ideas — virtually for free);

  • Very recently, two leading business houses in India announced in print and media that they would invest in start-ups. They invited entrepreneurs from all over the country (and abroad) to register and share their ideas (basic idea, sample model, commercial estimates). Everyone would be given the opportunity to make an ‘elevator pitch’. Once again, 1,000+ ideas, virtually for free.

And then there are some black sheep . . . . . .

  • Remember Speak Asia . . . . if you do some digging, you may find that similar schemes were floated on the African continent . . . . very successfully . . . . all stakeholders made money. Somehow the idea didn’t click in India.

  • If you have seen Die Hard 4 — the villain uses the skills of amateur hackers to develop code, and this code is used to disrupt systems.

If you look at any of the above-mentioned ideas, you may agree that all of them were simple ideas, brilliantly executed.

What is crowdsourcing and how does it work:

Simply put, crowdsourcing is a distributed problem-solving and production model. Typically, a problem is broadcast to an unknown group of solvers in the form of an open call for solutions. The ‘users’ or the crowd (i.e., the online community) come together and submit solutions. Yet another crowd sifts through these solutions and finds the more acceptable/better ones. These solutions are then owned by the broadcasting agency (i.e., the crowdsourcer). The winning solutions are sometimes rewarded: monetarily, with a prize, or with recognition (i.e., the contributors are paid crumbs and the broadcaster keeps the cake).
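The sifting step can be sketched as a simple consensus rule, the way reCAPTCHA-style systems accept an answer once enough contributors agree on it. This is a toy illustration with made-up data, not any real system's algorithm:

```python
from collections import Counter

def crowd_consensus(answers, threshold=0.5):
    """Accept the most common crowd answer if it wins more than
    `threshold` of the submissions; otherwise return None and
    keep collecting submissions."""
    if not answers:
        return None
    answer, votes = Counter(answers).most_common(1)[0]
    return answer if votes / len(answers) > threshold else None

# five people transcribe the same blurred word; one makes a typo
submissions = ["morning", "morning", "mornlng", "morning", "evening"]
print(crowd_consensus(submissions))  # → morning
```

The point is that individual contributors can be wrong; the aggregate, filtered through a consensus rule, is still reliable, which is what makes the open-call model workable.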

Advantages of crowdsourcing:
Without getting into the ethical aspects of the subject, one needs to appreciate that crowdsourcing can offer certain advantages:

  • Problems can be explored at a comparatively small cost, often very quickly.

  • It is possible to achieve a win-win proposition sans monetary compensation — the best example being Luis von Ahn’s reCAPTCHA and the efforts to translate Wikipedia’s German version.

  • Crowdsourcing makes it possible to tap a wider range of talent (or prospective customers) than is normally feasible — the best example being the auto industry, which has been using social media to source ideas from prospective customers about car design, features, accessories, etc.
  • The resultant rewards have the potential to spur activity — more entrepreneurship, growth in business, investments, employment, etc.

Criticism about crowdsourcing:

  • Once the crowd starts contributing, somebody has to sort and sift through the information. This is a costly affair; unless the right resources are used, the costs outweigh the benefits;

  • The absence of monetary compensation increases the likelihood of the project failing. Without money, one may face fewer participants, lower quality of work, lack of personal interest in the project/results, etc.;

  • Barter may not always be possible;

  • Risk mitigation through contracts may not be possible, since there are typically no written contracts or non-disclosure agreements, and little transparency about how the information will be used;

  • Difficulty in managing and maintaining a working relationship with the crowd throughout the duration of the project;

  • Susceptibility to faulty results and failure is still too high.

Though there are several pros and cons, so far the perception has been positive. With the success of ideas like reCAPTCHA and the translation project, people have started believing in crowdsourcing’s potential to balance global inequalities. A rather tall claim, but it’s still a wait-and-watch situation.

I would like to end this write-up by sharing my experience with crowdsourcing. Sometime ago, I downloaded a free app on my phone called Waze. At the time I didn’t know that it was a crowdsourcing app. However after using the app, I have (kind of) started leaning in favour of crowdsourcing and hope to see more developments in this field.

Waze app:

Waze is a free iPhone app which crowdsources real-time traffic and navigation data. The application has an advantage because the information it provides is ‘almost’ real-time and updated. It is quite different from the usual navigation/GPS systems because, apart from information about routes, Waze also provides information about traffic and the speed at which it is moving (it’s been a mixed experience for me), and about roads under construction (this is based on user inputs and quite accurate). If there is an obstruction or an accident and the road gets blocked, users can send an instant update and all users will be pinged instantly.

The best part is that, most of the time, the user simply has to switch on the application and leave it on. The software keeps tracking your speed (using GPS and your GPRS/3G bandwidth) and broadcasts this information to other users. If your car slows down, the app sends you a prompt asking if you are stuck in traffic. The information is broadcast almost instantly (I have noted that it is broadcast within 5-10 seconds).
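The speed-drop prompt described above amounts to a simple threshold check on successive GPS speed readings. Here is a hypothetical sketch (the threshold and sample counts are made up; Waze's actual detection logic is not public):

```python
def congestion_alerts(speed_readings_kmph, threshold=10, min_samples=3):
    """Flag the index at which a driver has been below `threshold` km/h
    for `min_samples` consecutive GPS readings, i.e., the point at which
    a Waze-style app would ask 'are you stuck in traffic?'."""
    alerts, slow_run = [], 0
    for i, speed in enumerate(speed_readings_kmph):
        slow_run = slow_run + 1 if speed < threshold else 0
        if slow_run == min_samples:   # fire once per slow stretch
            alerts.append(i)
    return alerts

# cruising, then a crawl, then traffic clears
readings = [45, 40, 8, 6, 5, 4, 30, 50]
print(congestion_alerts(readings))  # → [4]
```

Requiring several consecutive slow readings avoids false alarms from a single red light or a brief stop, which is why the app prompts only after a sustained slowdown.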

I have been using the app intermittently and have found it quite useful for avoiding traffic. I have benefitted from updates quite a few times, and that’s why I rate it as a pretty good ‘time-saving app’. While the app is free, there is a downside: the constant tracking can drain your battery, and unless you have a good data plan, it will also drain your wallet.

That’s all for this month. Next month is likely to be dominated with the budget proposals, but I promise that I will have some interesting ideas and stories to share with you.

Cheers.


Microsoft Office 2013 – Part II

About this write-up

MS Office is a popular application software suite and enjoys wide usage across the world. Recently, Microsoft released the Customer Preview of the latest version of its Office suite, i.e., Microsoft Office 2013 (a.k.a. Office 15). This write-up briefly discusses some of the new features likely to be introduced in the new software, product enhancements to existing features, and some pros and cons associated therewith.

Background

This write-up is the second part of the article on MS Office 2013. The first part dealt with some of the new features expected to be a part of MS Office 2013. Some of the features described were:

  • Cloud integration
  • Touch and stylus based interface
  • The new “Metro” look
  • Convenience of editing PDF documents in MS Word 2013
  • Support for Open document format (“ODF”) 1.2
  • Social media-related integration

In this part, we will look at some of the enhancements and new features expected to be a part of MS Office 2013. While there are many features one could write about, given below is a short summary of the changes/new features that you may find useful:

MS Word:

Right from the first moment you start Word, you will notice the crisp new interface. The basic interface has been changed (“Metrified”). The Ribbon feature has been changed (Microsoft has made it flatter) to appear more spacious. One reason for this is that, when MS Word is used on a smartphone, the look and feel and the user experience while switching from desktop to tablet/smartphone should appear seamless.

Besides the above, cleaning up the main interface has the effect of giving more space and allows the user to focus on the document itself rather than the tools (which are supposed to help and not hinder). To be candid, when I migrated from Office 2003 to Office 2007, the one convenience that I appreciated the most was that the interface allowed me to work on the document/spreadsheet. All the tools that I would need were neatly organised on the ribbon. Whenever the need arose, they were only 2 or 3 clicks away or (most of the time) just a right click away. I haven’t had the chance to use MS Word 2013, but I have a feeling that the experience is going to be even better.

The Read Mode is yet another feature to look forward to. It is particularly aimed at tablet users. As the name suggests, this feature is for reading. When you switch to the Read Mode, the interface is literally reduced to a bare minimum, allowing the document to reflow and fit into the screen. One can say it’s almost like the full screen mode. The interface provides “thumb friendly” buttons on either side of the screen for easy navigation. Some users may be in for some disappointment, because this mode allows the reader to read only one document at a time. Duh … you wanted to read, right? What else do you want?

The track changes feature too has been improved for a better user experience. The user interface in Word 2013 uses a simpler mark-up look, which appears less overwhelming (for many) and intimidating (for some) than the earlier red (strikethroughs) and blue (bold/underline) mark-ups. The new Markup view presents the final version of the document with indicators in the margin against the sentences that have been edited. Whenever you are ready to focus on the changes, just click on the indicator line and it will expand into a thread. Users may find this feature particularly useful while collaborating with others.

One of the conveniences that has been discontinued in MS Office 2013 is the option to add a spelling to auto correct by right clicking. This feature was introduced way back in MS Office 97 (I think) and was an instant hit. It was very useful for correcting typos, the kind you make while typing any document, for instance typing “o fthe” instead of “of the”. Earlier, all the user needed to do was to right click and, instead of just correcting that one instance, add it to the auto correct list and save the effort for all similar typos. MS Office 2013 will no longer offer this convenience … but don’t despair, you can still go to the Auto Correct menu and add the same. The only difficulty will be finding it.

EXCEL:

Once again, the basic interface has been metrified, i.e., it looks and feels very crisp. The look and feel is common across all the applications of MS Office 2013.

If you thought that MS Excel was an outstanding product, the latest version of Excel is even better. Microsoft has added some awesome tools, and the Quick Analysis tool is one of them. In the earlier version, if you selected a range of cells with numbers, nothing happened. In MS Office 2013 Excel, if you select a range of cells with numbers, a QUICK ANALYSIS tool pops up next to the selected range and gives you a variety of options, such as conditional formatting, charts showing most of the information, formulae, table formats and in-cell sparklines (introduced in Office 2010). Hover over any of the options and you will see it previewed either in the data or in one of the pop-up charts. The suggestions are intuitive and change according to the data highlighted. While the overall number of options remains the same, the interface suggests which options (such as a particular chart or a pivot table) may be more suitable, which you may find useful.

The next in line is the Chart Advisor. An early prototype was featured on Office Labs; it has now been fully integrated along with other analytical tools in Excel. One can say it’s a plain vanilla version of professional business analytics tools. With the Chart Advisor, the likelihood of getting the right chart or pivot table in the first attempt itself is far higher, which, many may agree, translates to tremendous savings in time. Guess that’s one up for artificial intelligence.

The previous avatar of MS Office, i.e., Office 2010, brought in several features which kind of added “jazz” to Excel. The current version, i.e., Office 2013, has focussed more on functionality than on “jazz”. But that does not mean that there is no “jazz” added in MS Office 2013. As a matter of fact, the error function (i.e., the indicator which highlights errors or inconsistencies) has been spruced up quite a bit. For instance, if you move between cells or add or delete some figures that lead to a change in some other result or formula, you are likely to see subtle animations drawing your attention to what has changed. So what’s new, eh? Well, for starters, if the change is in the displayed area/sheet, then the animation is, let’s just say, less animated, and if it’s in a different sheet, the animation is a bit more animated. If you click the cell, there will be onscreen prompts to lead you to whatever it is that Excel intends to draw your attention to. This makes it much harder to change or delete information that alters your results without noticing that it makes a difference. Sounds exciting, doesn’t it?

Even the error messages are more useful. For instance, suppose you drag a cell across the worksheet when what you really meant to do was click somewhere else: the older version would give you a fairly “cryptic” warning, but this will not be a problem in MS Office 2013. Excel now gives you a warning in a far simpler and more descriptive manner, suggesting what’s wrong. Add to this, there is now a whole new add-in to look for errors and inconsistencies between worksheets.

The Time Slicer and the Quick Analysis tools are some other tools to look forward to. The Time Slicer helps you dig further into your data. For instance, it organises data by date, so you can filter down to a specific period or jump through figures month by month to see the differences. The Quick Analysis is like a shortcut of sorts for making sense of your data as it is, or one may say that it is a way to preview different visuals, i.e., you’ll see various formatting options, and as you hover over them you’ll see the document change accordingly, giving you a glimpse of what you’ll get if you end up selecting that option. This is quite similar to the formatting and fonts preview available since the Office 2007 days.

In MS Office 97, Microsoft introduced the auto fill feature. It’s one of the features that I have come to appreciate over a period of time. It is an excellent tool to use when filling up data in tables. Flash Fill apparently is a step up. Flash Fill is a feature that recognises your data patterns to the point where it should be able to predict what belongs in the remaining blank cells and fill them in for you. For example, if you were to make a time sheet spreadsheet detailing which client time was spent on and by which employee, Excel would eventually pick up on which employees have worked on a client/specific project and fill up the data for you; for instance, every Saturday is booked for internal filing. In theory, you just have to enter some of that data and then go to the Data tab, where you press the Flash Fill button to make it fill in the rest. A word of caution here: available feedback indicates that Flash Fill is not able to interpret/pick up on trends in all data.
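To make the idea concrete, here is a toy sketch (in Python, with hypothetical names and data) of the kind of pattern-based filling described above. This is only an illustration of the concept; Microsoft’s actual Flash Fill uses a far more sophisticated program-synthesis engine, and the `learn_template`/`apply_template` helpers below are my own inventions:

```python
# Toy illustration of pattern-based fill (NOT Microsoft's actual Flash Fill
# algorithm): from one worked example, guess a token-level template and
# apply it to the remaining rows.

def learn_template(src, dst):
    """Map each token of the example output to a source token or its initial."""
    src_tokens = src.split()
    template = []
    for out_tok in dst.split():
        for i, s in enumerate(src_tokens):
            if out_tok == s:
                template.append(("full", i))      # copy the whole token
                break
            if out_tok == s[0] + ".":
                template.append(("initial", i))   # abbreviate to an initial
                break
        else:
            template.append(("literal", out_tok)) # fixed text, copied as-is
    return template

def apply_template(template, src):
    """Fill one cell by replaying the learned template on a new source row."""
    src_tokens = src.split()
    out = []
    for kind, ref in template:
        if kind == "full":
            out.append(src_tokens[ref])
        elif kind == "initial":
            out.append(src_tokens[ref][0] + ".")
        else:
            out.append(ref)
    return " ".join(out)

names = ["Rakesh Shah", "Anita Mehta", "Vikram Rao"]
tmpl = learn_template(names[0], "R. Shah")        # the one example you typed
filled = [apply_template(tmpl, n) for n in names[1:]]
print(filled)   # → ['A. Mehta', 'V. Rao']
```

Given the single worked example “Rakesh Shah” → “R. Shah”, the sketch fills the remaining rows automatically: conceptually, this is what happens when you type an example or two and press the Flash Fill button.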

There are several other features to write about, but maybe in the future, once I lay my hands on the official version. Well, that’s all for this month. Wish you a Happy Diwali in advance.

Disclaimer: The discussion regarding the features and enhancements contained in this write-up is based on the various feedback/reviews available on the internet and in various magazines, blogs, etc. The purpose of this write-up is only to share knowledge and not to malign any person or product.

Big Data – What is it all About??

About this article

Big Data is not a very new idea; it’s been out there for quite some time. Nonetheless, very few people have realised its full potential. To highlight a few advantages, Big Data can help businesses become more efficient, help them serve their customers better and at the same time improve their bottomline. In a completely different sphere of life, Big Data helps various research organisations track a variety of data, such as meteorological data, data related to clinical tests conducted, etc.

Be it business establishments like eBay, Amazon and Facebook or research organisations like NASA, the UN, Governments across the world, etc., the one common link for all those who use Big Data is technology. This article seeks to create awareness about how technology is used to store and analyse Big Data. Like all big ideas, there are several stories (successes as well as failures, myths, etc.) associated with it. This article will deal with some of the successes and failures.

Background

Ever wondered how a weather bureau predicts the weather? Or, for that matter, how organisations like NASA and ISRO monitor space (in case you didn’t know already, apart from secretly tracking UFOs), which includes tracking various stars, planets, meteorites, comets, spacecraft, satellites and millions of objects of floating junk which were in some form or another a part of a satellite or some cargo carried by satellites? Then there is the curious case of the measurements that scientists make, such as those in a nuclear test or at the Hadron Collider. How about mapping the human genome? Did you know that there are more than a billion unique data sets?

I know that sounds hugely futuristic, and the question that begs to be answered is “What do I care?” or “How does it matter to me?”. Well, let’s just say that what is described above are some of the sources and users of Big Data. Closer to home, or to our everyday life, Big Data is used by giants like Facebook, Amazon and Walmart, to name a few, for improving customer experience.

Characteristics of Big Data:

Well, to be honest, “Big Data” is more like a term which was coined in reference to the data. What I mean is that there is no “official” definition of “Big Data” or, for that matter, “Small Data”. But, generally speaking, Big Data refers to data characterised by four features, i.e., volume, variety, velocity and veracity. To understand this better, let’s take a few illustrations of these characteristics that are closely identified with Big Data:

Volume:
Today, businesses everywhere are awash with ever-growing data of all types. Conservatively speaking, they collect huge amounts of data (often the volume runs into terabytes, in some cases petabytes, of information).

For instance, someone like Twitter would churn x terabytes of tweets created each day into improved product sentiment analysis. Someone like General Electric is likely to convert billions of annual meter readings to better predict power consumption. One company boasts of systems which track crime-related events and can help Governments reduce crime rates.

Velocity:

Sometimes, a few minutes is too late. For certain time-sensitive processes, such as catching fraud, Big Data must be used as it streams into your enterprise in order to maximise its value.

For instance, exchanges like the Bombay Stock Exchange, National Stock Exchange, etc., scrutinise millions of trade events created each day to identify potential fraud (like the punching error reported very recently). A couple of weeks ago (and even in the past), these exchanges had assisted SEBI in pinpointing instances of circular trading and front running.

Variety: For the readers of this Journal, data would mean spreadsheets, word documents, accounting records, etc. But in reality, there is a vast variety of forms/formats in which data can exist. In the case of Big Data, data may be of any type: structured and unstructured data, text data, sensor data, audio, video, click streams, log files and more. Typically, new insights are found when all these different types of data are put together and analysed from a specific point of reference, or from a variety of them.

The classic examples of this would be Facebook, Amazon, etc., and, if I may dare to say so, “algorithmic trading solutions”. It is said that in some cases the “algos” are so advanced that they analyse tweets and social media trends for “sentiment” and execute trades on the basis of such analysis alone.

Veracity:
What role does veracity have to play here? Imagine this: you spend a fortune putting in place a system to collect data. Thereafter, the data is stored before an analysis is made. What good would the collection, storage and analysis be if the data collected was inaccurate? Further, customers part with their data willingly (most of the time unknowingly), trusting that their privacy will not be violated. Statistically speaking, one in three business leaders don’t trust the information they use to make decisions.

How can you act upon information if you don’t trust it? Establishing trust in Big Data presents a huge challenge as the variety and number of sources grows.

Big Data – has been out there for some time:

Most
people go under the assumption that Big Data is a recent phenomenon.
But that’s not quite true. As a matter of fact, companies like American
Express1 and Google have been using Big Data in some form or the other,
to analyse and predict customer behaviour, with a view to enhance
customers’ service and public perception. While this may or may not be
true, the fact remains that the amount of data captured and analysed in
the last two to three years, far exceed the total data (in volume and
variety) captured over the last millennia (at the least).

Big Data – recent changes:
What most people don’t realise is the manner and extent to which changes have taken place in the last couple of years. To begin with, storage space has increased dramatically, and our ability to process such data has been growing exponentially. One could also attribute some of the positives to technological advancement, the development of new analytical models, etc. Given all these, our need for such data, the manner of its use and its very application have undergone a sea change (one may say, a change of epic proportions). Here is why:

  • Walmart handles more than 1 million customer transactions every hour, which are imported into databases estimated to contain more than 2.5 petabytes of data.
  • Facebook handles 40 billion photos from its user base.
  • The FICO Falcon Credit Card Fraud Detection System protects 2.1 billion active accounts worldwide.
  • Decoding the human genome originally took 10 years; now it can be achieved in one week.
  • There are 4.6 billion mobile-phone subscriptions worldwide, and between 1 billion and 2 billion people accessing the internet.
  • Between 1990 and 2005, more than 1 billion people worldwide entered the middle class, which means more and more people with rising incomes becoming literate, which in turn leads to information growth.
  • The world’s effective capacity to exchange information through telecommunication networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000 and 65 exabytes in 2007; it is predicted that the amount of traffic flowing over the internet will reach 667 exabytes annually by 2013. (Source: Wikipedia)
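As a quick sanity check on the telecommunication capacity figures above, one can work out the implied compound annual growth rate. A Python back-of-the-envelope, using decimal units (i.e., assuming 1 exabyte = 1,000 petabytes):

```python
# Implied compound annual growth of the world's information-exchange
# capacity, from 281 PB in 1986 to 65 EB (65,000 PB) in 2007.
start_pb = 281
end_pb = 65 * 1000
years = 2007 - 1986                      # 21 years
cagr = (end_pb / start_pb) ** (1 / years) - 1
print(f"{cagr:.1%}")                     # roughly 30% a year
```

That is, capacity compounding at roughly 30% a year, i.e., doubling approximately every two and a half years, which is broadly consistent with the jump to a predicted 667 exabytes of annual internet traffic by 2013.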

How big is “Big data”:

Consider this. In 2012, the Obama administration announced the Big Data Research and Development Initiative, which explored how Big Data could be used to address important problems facing the government. The initiative was composed of 84 different Big Data programs spread across six departments. The United States Federal Government owns six of the ten most powerful supercomputers in the world.

Big Data has increased the demand for information management specialists, due to which software giants of the likes of Software AG, Oracle Corporation, IBM, Microsoft, SAP and HP have spent more than $15 billion on software firms specialising only in data management and analytics. This industry on its own is estimated to be worth more than $100 billion. That’s not all; it’s reported to be growing at almost 10% a year, which is roughly twice as fast as the software business as a whole.

In the Indian scenario, the Big Data industry is expected to grow from $200 million in 2012 to $1 billion in 2015, at a CAGR of over 83%. Nasscom’s prediction is that Big Data will help the BPO industry move forward, as it will enable “evidence-based” decision-making for clients, which in turn has a high impact on business operations.

Can we ignore Big Data?

The answer seems to be a resounding NO. Why? Because, to remain competitive, all organisations need to analyse both internal and external data, as quickly and cost-effectively as possible. As the world becomes more instrumented, with RFID tags, sensors and other sources, companies are creating more and more data. When paired with external data, like that generated by social media sites, there is an incredible opportunity that is largely untapped and unanalysed.

Parting remarks:

This write-up was intended to be a precursor, to give the readers a basic overview of Big Data. In the next part, we will cover some more ground, delve into some more details, understand what all the hype is about, and see whether there is a hidden pot of gold at the end of the rainbow or not.

Until then, I wish all the readers a Happy Dassera.

Disclaimer: The information/factual data provided in the above write-up is based on several news reports, articles, etc., available in the public domain. The purpose of this write-up is not to promote or malign any person or company or entity. The purpose is merely to create awareness and share knowledge that is already available in the public domain.

Keyboard Short cuts for BlackBerry Devices

TECH UPDATE

This article is about simple keyboard short cuts for
BlackBerry devices. Keyboard short cuts help in improving our typing speed and
in many cases, navigating between applications. The tips mentioned in this
write-up would apply for 8800 series and later devices; these may or may not
work on the older devices.

Everybody likes short cuts :

In general, we are all lazy in one way or another. If one were told that he could do the same task with less effort (and without compromising on the output), the first question he would ask is ‘How do I do that?’ and the answer would be ‘Use a short cut’. While keyboard short cuts like CTRL+C and CTRL+X and others are used extensively, this article is about short cuts for your BlackBerry devices. Yes! There are keyboard short cuts for BlackBerry devices too (i.e., beyond the standard short cuts for copy, paste and send). Here are some instances which you may find useful:

Rapidly switch back and forth between BlackBerry applications :

The average desktop, or for that matter laptop, contains a ‘smart’ chip: a multi-core processor. As the name suggests, such processors are capable of performing several tasks and executing processes simultaneously. Among other things, this allows a user to switch from one task to another without compromising on speed. The switch is almost instantaneous when you use a desktop or a laptop. This agility, however, is not available on your BlackBerry. The explanation is simple; the BlackBerry device (like the other competing smart phones) uses a simpler processor.

So how does one get around this handicap ?

Simple . . . . . Use a short cut.

The most basic way to switch from one BlackBerry application
to another is to repeatedly hit the ‘ESCAPE’ key while inside a programme until
you get back to your icon screen. From there, you’d scroll your track ball or
wheel to find the next application you want and then click to launch it.

A quicker and more efficient way to go from an active program to another is to use a short cut. While inside an application, hold down the ‘ALT’ key, which is directly below the letter ‘A’ key, and then click ‘ESCAPE’, the key with an arrow reversing direction, found to the right of your trackball on 8000 series devices. While holding down ‘ALT’, you can scroll left or right between apps, and you need only release the ‘ALT’ key to select a program. (For this, a program needs to have been opened recently or still be running.) You can always access your Home Screen, BlackBerry browser, Options, Call Log, Messages and a few other applications, depending on your device settings.

Using the event log :

Your BlackBerry’s Event Log displays your system’s recently run events and processes. If you’re experiencing a problem with your BlackBerry or having an issue with a specific application or service, information from the Event Log can be helpful for troubleshooting. And it can be good BlackBerry hygiene to clear out the log, to keep your device running smoothly.

To access your Event Log, go to your Home Screen, hold down the ALT key and then type ‘LGLG’. The Event Log will appear, and you can click a specific event for more information or hit your BlackBerry MENU key for more options. (The MENU key has seven dots in the shape of the letter B, and it’s found directly to the left of the trackball on BlackBerry devices.) You can copy event information using the MENU key and tailor your settings to log only specific types of events.

Freeing up some memory space :

You can also free up some valuable device memory and help your device run faster by clearing your Event Log. To delete your list of events, hit the BlackBerry MENU key while any event is highlighted and then click ‘Clear Log’. A dialogue box will pop up asking if you’re sure you want to delete the log. Once you confirm the deletion, your log will be cleared. (Don’t worry; if your IT Department is running device management software along with its BlackBerry Enterprise Server, your company probably has its own record of this event log.)

Reboot your BlackBerry without removing the battery :

Any BlackBerry veteran knows that it is sometimes necessary to reboot your device after installing a new application, to solve performance problems, refresh your smartphone’s memory or fix other minor issues. One way to do so is to remove your battery door and pull the power pack. After the battery is returned to the device, your BlackBerry reboots. This gets the job done, but it’s time-consuming to power down the device and then remove and replace the battery, and your battery door won’t fit as snugly if you’re constantly taking it off.

The quickest and easiest way to reboot is via another BlackBerry keyboard short cut. To reboot, simply hit ‘ALT’, ‘RIGHT SHIFT’ and ‘DELETE’. (The RIGHT SHIFT key is found on the bottom right corner of the BlackBerry keyboard, and the DELETE key is also on the right hand side and has the letters ‘DEL’ on its face.) You might say this is the BlackBerry version of CTRL+ALT+DEL. After pressing these three keys together, your device powers down, your LED indicator turns red for a few seconds and the reboot process commences.

Change your signal strength display from bars to numeric :

Most modern cell phones offer some form of ‘five-bars’ display of the user’s wireless signal strength, and the BlackBerry default mode is no different. But if you want more precision than bars can offer, you can change to the numeric signal strength display mode. The numeric mode shows wireless signal strength in dBm, i.e., decibels referenced to one milliwatt: a ratio of the measured power, expressed in decibels (dB), relative to a one milliwatt (mW) reference.
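The bars-to-dBm relationship is just a logarithmic conversion. A small Python sketch of the arithmetic (the function names are mine, invented for illustration):

```python
import math

# dBm expresses absolute power as decibels referenced to 1 milliwatt:
#   P(dBm) = 10 * log10(P_mW / 1 mW)

def dbm_to_mw(dbm):
    """Convert a dBm reading back to milliwatts."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    """Convert a power in milliwatts to dBm."""
    return 10 * math.log10(mw)

print(dbm_to_mw(0))      # 1.0 -> 0 dBm is exactly one milliwatt
print(mw_to_dbm(1e-10))  # ≈ -100, a weak signal in this article's terms
```

Note that received signal powers are tiny fractions of a milliwatt, which is why phone readings are negative dBm values: each 10 dB drop is a tenfold drop in power.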

To switch from bars to numbers, navigate to your BlackBerry home screen, hold the ‘ALT’ key and enter ‘NMLL’. The signal display will then automatically show a dBm value. In general, a reading from -45 to -85 is considered very strong. Any reading that’s lower than -85 (for instance, -100) is weaker. To switch back to bar mode from numeric, just hold ALT again and retype ‘NMLL’.

The numeric display can help you determine accurately how much a wireless signal degrades as you move from place to place. (It’s also geek chic to read your cellular signal strength in dBm instead of boring old bars.)

Bring up ‘Help Me’ screen for device, system data:

Your device’s ‘Help Me’ screen displays useful device and system information such as your vendor ID, the BlackBerry platform version, the OS version, your PIN, the International Mobile Equipment Identity (IMEI) number, etc. While most of this information is available at various locations throughout your BlackBerry Options, the Help Me screen offers a simple way to access all the data on a single screen.

To pull up the Help Me screen, navigate to your Home screen and then press ‘ALT’, either ‘SHIFT’ key and the letter ‘H’. To return to your Home screen, hit ESCAPE or open the MENU and select Close.

That’s all for this month. You can email your feedback to me on sam.client@gmail.com. Do look forward to my next write-up on the topic of cloud computing.

Smart phones a cyber security risk

Computer Interface

Proliferation of smart phones :


To say that we are constantly surrounded by advanced
technologies would be a cliché. What would be even more clichéd is the fact that
every day we see, hear and read about some new development in the field of
information technology. This may be about the next generation televisions or the
newest Apple ‘i’ product or the latest handheld device or other
products/services. These developments have not only made our lives a little bit
easier (A LOT easier if you ask me), they have made us more efficient at the
things we do best (or ‘handicapped’ due to technological innovation as a
naysayer would prefer to say).

In this connection, mobile phones have become increasingly popular and more affordable over the past few years, and thanks to Android, BlackBerry and the iPhone, smart phones are in demand. In fact, a majority of the mobile devices purchased worldwide are a type of smart phone. People have now started realising that these smart phones are in fact miniature computers. They run a variant of computer operating systems such as Linux (Android), Mac (iPhone) and Windows (Windows Mobile), and can do pretty much anything that a computer can do. Most smart phones also pack powerful processors, a hefty amount of RAM and a lot of storage space, in some cases up to 48 Gigs! (It all depends on the size and depth of your wallet.) The downside is that even though a smart phone is a handheld computer, most users don’t treat it the same way as their computer at office/home.

Duh! So what’s the point?

Well, to start with, bet you didn’t know:

  • More than 54 million smart phones were shipped worldwide in the first three months of this year, a 57% jump from a year ago, according to research reports.
  • Less than 40% of users (as per recent surveys) follow the practice of securing their smart devices. As a natural corollary, the vast majority doesn’t even bother securing their mobiles, PDAs or smart phones by using, and regularly changing, a password or PIN.
  • The information that many of us keep on our mobile phones (phone numbers, addresses, birthdays and even bank account numbers) is just the kind of information which, in the wrong hands (half-robinhoods), can be used to perpetrate frauds, including re-creating your identity (please refer to my write-up on Facebook frauds — Stranded in London).
  • It isn’t just the user of the phone who is at risk, but also the organisations they work for (especially since many of us use the same device in both our work and personal lives). The reality is that any gadget that has access to the Internet presents a risk to an organisation if the user doesn’t secure the device properly.
  • Smart phones are very susceptible to being hacked and to catching viruses, in some ways even more easily than a computer.
  • All of the above facts are not lost on cyber criminals.

If you still think the above is the stuff we see only in
Hollywood thrillers, then read on.

Smart phones, the weak link :

Most people purchase their mobile devices solely based on the number of ‘cool’ applications they can run. The more apps the better, right? Wrong. Cyber criminals love this idea of an ‘Application Market’, ‘Store’, or whatever one may want to call it, because now they can transmit malware easily throughout the world without having to put forth any effort at all. All you need to do is download an infected app and BAM! Your phone is infected.

In January 2010, a mobile application developer (who goes by
the name of ‘Droid09’) uploaded a malicious application to the Android App Store
that posed as the ‘Official First Tech Credit Union’ banking application. This
application was nothing more than a way to steal personal information like
banking logins and passwords. Eventually, the application was removed, but not
before a few customers felt the effect of this rogue application.

Similar to this was a Trojan directed at smart phones running Google’s Android operating system. The Trojan, named Trojan-SMS.AndroidOS.FakePlayer.a, infected a number of mobile devices. Once installed on the phone, the Trojan begins sending text messages, or SMS messages, to premium rate numbers — numbers that charge a fee — without the owner’s knowledge or consent, taking money from users’ accounts and sending it to the cyber criminals.

In both instances, the attacks mainly involved spyware (software that obtains information from a user’s device without the user’s knowledge or consent) and phishing (a process used by cyber criminals to acquire a user’s personal information by masquerading as a trustworthy entity in an electronic communication). Needless to say, the motive behind these attacks was profit.

(While I have cited two instances of Trojans on Android, let me assure you there are as many or more on the other systems. Press reports suggest that there are as many as 500 viruses, many of which are capable of attacking all the popular platforms.)

News reports suggest the proliferation of smart phones is the primary contributor (that’s like saying marriage is the root cause of divorce). And now, with smart phone use becoming more widespread, the bad guys are looking at web browsing and the downloading of web applications (apps) as two ways to attack Android handsets, iPhones, BlackBerrys and Windows Mobile smart phones and spread malicious web apps. Some of these viruses are rumoured to have the capability of harvesting or erasing stored phone numbers and text messages, and of retrieving information that can be used to disclose a user’s location.

The rising tide:

According to a well-respected security firm, the reason there haven’t been more mobile phone attacks is because Windows XP computers are still the easiest devices to exploit. And although Microsoft no longer supports it, the Windows XP operating system is still extensively used throughout the world. But as XP disappears, the cyber crooks will begin looking to smart phones, because it’s easy to make money exploiting them.

While smart phones running any operating system can be targeted, speculation is that those running the iPhone, Android and Symbian operating systems will be the targets of choice for the criminals. This is because they are the most commonly used. So far attacks on smart phones have mostly involved tricking users into clicking on a link and divulging personal information. But one can expect to see mobile smart phone worms, a form of malicious software, that replicate and automatically spread to everyone listed in a phone’s address book. Such a worm could spread an infection worldwide in only a couple of minutes.

Mainstream security firms are predicting that in 2011 smart phones are likely to be attacked by more malware and sophisticated data-stealing Trojans. These attacks could be launched by targeting social networks, HTML 5 and stolen digital certificates (as Stuxnet did), among other things.

In conclusion, one can say that viruses and other malware have long been a threat confined to computers. But as smart phones become too smart (for their own good), the bad guys are likely to target them more and more with viruses. And as has already happened with computers, the smart phone assault is expected to be led by cyber criminals aiming to turn a profit. Characteristically, there seems to be a lag between adopting new technology and taking the appropriate action to secure it. Simply put, first we embrace it, then we become aware of the potential risks it may bring, and only after that do we make the effort to secure it in order to better protect ourselves. We went through the same cycle with the introduction of email and learning the value of anti-virus and anti-spam protection, and more recently with social networking (and the need to be careful about what information you make publicly available). We are now going through that cycle with Internet-enabled mobile devices.

The risk increases significantly when you consider that a vast majority of employees in any company use at least one self-purchased technology device at work.

The sad part is that many organisations have not yet caught up with the security protection and policies that the latest mobile gadgets require.

As a parting shot, just think about it: There are more phones on the planet than computers. And it’s easier to steal money from phones. Are you prepared to deal with this eventuality?

Social Networking – Be Careful Out There – II

About this Article

This write-up is Part 2 of a three-part series on the topic. The previous write-up was aimed at creating awareness about some of the myths and misconceptions related to the use of social networking sites.

While the recent events have been an eye opener for some people, there are many others who continue to throw caution to the wind.


This article highlights some simple steps and safe practices which may help in making your experience a safe one rather than a sorry one.

Background

The previous write-up briefly discussed some of the myths and misconceptions related to the use of social networking sites. It also focussed on the complete lack of awareness of how personal information is stored, accessed and made available on the internet. The more shocking revelation being that the information is, more often than not, revealed without the permission of the person most likely to be affected by such a revelation.

 The key takeaways from the previous write up were:

• Social networking sites aren’t responsible for your privacy…. you are!!!

• Default settings on the site may or may not provide adequate protection.

• When social networking sites change their privacy policy, they may or may not tell you about the changes made; more importantly, they may not tell you how your "personal" information is about to become more public.

• The privacy policy of the social networking site does not extend to its partners (i.e. app and other third party service providers).

• When something is provided to you free of cost, it doesn’t mean that there is no cost attached. On the contrary, it means that someone else is footing the bill. And that ‘someone’ is going to extract something of value (like your private info) in return.

• Social networking is a paradox – you are posting data meant to be private on a medium which is meant to be public.

Risks

Very recently, Facebook acknowledged that their servers were hacked. While the company said that there was no data loss/damage done, there is no way of knowing for sure whether that was a fact. This may come as a surprise to some people, however, for others it was something that they always expected to happen.

Given the nature and amount of data collected and stored by some of the social networking sites, it was obvious that sooner or later, they would be targets of cyber criminals.

A curious person might ask: what does a social networking site have that may be of interest to anyone other than its users? Or: the information posted by me is harmless, so what damage can a hacker do to me?

A short list of the risks involved is as under:

• All your private information, either about yourself or your friends, their likes or dislikes will be compromised.

• Someone could use this information to bully you or cyber-stalk you or your friends.

• The information may be used for inappropriate or illegal purposes including phishing, cyber frauds, hacking someone else’s account, etc.

• It is also possible that your 'views' about someone or something may be disclosed to the very person concerned, with consequences to follow.

• Your name and details may be used to spread viruses, spam, malware, etc.

• Someone may hijack your email account or Facebook page and post some damaging information.

Steps to Safe Social Networking Experience

It is important to remind the readers that there is very little one can do against a determined hacking attack or a skilled scamster. After all, considering that the networking sites with all their resources couldn't do much, can you do any better? It is therefore imperative that you take steps to reduce the impact of any damage that may be caused. Listed below are a few 'counter measures' that may be useful:

Don’t succumb to peer pressure:

Peer pressure is like a double-edged sword: at times it forces you to excel, and at other times you succumb to it, and that moment of weakness sometimes leads to disastrous consequences.

Don't let peer pressure, or what other people are doing on these sites, convince you to do something you are not comfortable with. Stay within your limits. Remember, just as spoken words cannot be taken back, what you post on these sites cannot be erased (at least not very easily). It will remain in the system no matter what.

Keep personal information out:

Generally people have a tendency to post personal information like their phone number, photos of their home or their work place, school or date of birth, etc.

Just stop for a minute and think about it. This is the same information that a hacker would need to access your bank account, your credit card, etc. Do you really want to leave this information out in the open?

Keep your profile closed, allow only your friends to view the profile. Else, for a skilled hacker or a scamster, you would be a sitting duck, ripe for the kill.

Mask your identity:

Be very wary of posting any personal data. If possible, use a nick name or an alias (commonly referred to as a 'handle').
It's very easy to set up a separate email account to register and receive information from the site.

The advantage being that should you ever feel the need to close the account or stop using the social networking site, you needn't stop using your primary mail account.


Use strong passwords:

Remember, the password is the weakest link in the chain. Birthdates, locations and nicknames are too common; it doesn't take a supercomputer to figure out these types of passwords. A hacker will simply have a look at your profile, where the information will be sitting right in front of his eyes.

Make sure that you use a combination of upper and lower case plus numbers and special characters. It doesn’t have to be very difficult.

Common daily use sentences like 'I travel by western railway' can also be converted into a unique password by making use of a combination of upper and lower case characters along with symbols. Something as obvious as BCAS 2013 can be written as '8©@S2013', and it becomes many times more difficult to guess or hack, yet easy for you to remember.
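The sentence-to-password trick described above can be sketched in a few lines of code (a hypothetical helper for illustration only; the substitution map is my own example, so pick your own and keep it private):

```python
# Toy illustration of turning a memorable sentence into a stronger password.
# The substitution map is an example only; a real one should be your own secret.
SUBSTITUTIONS = {"a": "@", "b": "8", "c": "(", "i": "1", "o": "0", "s": "$"}

def sentence_to_password(sentence: str) -> str:
    """Take the first letter of each word, then swap in symbols/digits."""
    initials = "".join(word[0] for word in sentence.split())
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in initials)

print(sentence_to_password("I travel by western railway"))  # -> 1t8wr
```

In practice you would keep the whole sentence, or otherwise add length; the point is only that a memorable phrase plus substitutions beats a birthday or a nickname.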

Social networking vs. venting out

Social networking and venting out are two separate things. Remember that what goes online stays online.

Don’t say anything or publish pictures that may cause you or someone else embarrassment.

Never post comments that are abusive, or those that may cause offence to either individuals or groups of society.

Recently, many companies have started (re)viewing current and prospective employees’ social networking pages. The slightest indiscretion and you are likely to be on your way out.

What you say can and will be used against you

Who actually owns and who controls “your” intellectual content that you post is not as clear as you might think. This also raises the question: If you don’t own it, can you really control it?

Terms of usage vary with every social networking service. More likely than not, as soon as you sign up, you give up control of how your content may be used.

Be careful in choosing your friends:

It's age-old advice. Be that as it may, it applies to your offline as well as online friends. Be wary about who you invite or accept invitations from. Be aware of what friends post about you or in reply to your posts, particularly about your personal details and activities.

Never disclose private information when social networking. Most importantly, be careful about clicking on links in an email or social networking post, even if it's from your friend (in some cases, especially if it's from your 'friend').

One of the biggest mistakes you can make is to accept friend requests from people you don’t know. When you do that, you are inviting people you know nothing about to share your personal information.

When your friends share information about you on their networks that you'd rather keep private, contact them and request them to remove the damaging information. Some sites may also permit you to remove any tags that your friends use to identify you in their posts.

Guard against phishing:

Be guarded about who you let join your network. Use the privacy settings to restrict strangers from accessing your profile. Be on guard against phishing scams, including fake friend requests and posts from individuals or companies inviting you to visit other pages or sites. If you do get caught in a scam, make sure you remove any corresponding likes and app permissions from your account.

Don’t be afraid to block specific users or set individual privacy settings for certain sensitive posts and information.

While the 'counter measures' discussed above may not offer complete protection, you may be saved from a total disaster. After all, prevention is always better than cure.

The next write up (the third and concluding part) will deal with the specific issue of changing your privacy settings (i.e. location) and some basic steps on what to do if your account is hacked.

Using the Internet for mass collaboration

About this article:

This article is based on a video of Luis von Ahn aired recently on a popular site i.e., www.ted.com. The video itself was recorded sometime around April 2011.

Every once in a while you come across something, an idea or a vision, that knocks you down completely. What strikes you the most is the simplicity. This article is about one such idea and how a few individuals have used their minds to harness the energies of millions and millions of people to help make a difference.

The Internet, as a resource, is viewed differently by different individuals. For some it is a source of information and knowledge, for others it is a means of earning a livelihood, and then there are those who are able to use their limitless imagination and ingenuity to effortlessly harness the power and labour of millions and millions of individuals, to achieve the unbelievable or the next to impossible.

One honest confession I need to make: while I had heard about mass collaboration and had seen its practical applications (one of which is Wikipedia), when I first saw this video, I was completely awestruck and blown away.

Here are a few statistics to tell you why:
  • Currently, more than 350,000 websites are using these ideas
  • Time spent per day is equivalent to 500,000 man-hours
  • The number of words digitised by these sites exceeds 100 million a day, the equivalent of the effort required to digitise approximately 2.5 million books a year
  • The effort is put in one word at a time, 10 seconds per person, by approximately 500 million people.
Mind you!!! This is just a sample of what limitless imagination and ingenuity can achieve.
So what is this mind boggling, out of the box idea that I am raving about? Well . . . . . . . all I can say is three words: CAPTCHA, RECAPTCHA and DUOLINGO.
CAPTCHA:

Captcha = Completely Automated Public Turing test to tell Computers and Humans Apart
"What's that?", you ask, is a very common response, so let me translate it into non-geek language.
Let's say you are trying to register or log into sites like Google, Facebook, Twitter (and several others) and you see some oddly distorted letters/words (see picture below).
These seemingly innocuous letters (or text pieces) are a common sight today. While most recognise them as a security feature, fewer web surfers know that these are tools for identifying whether the person accessing the site is a human being or a computer (bot), and hence the name: Completely Automated Public Turing test to tell Computers and Humans Apart.
For those of you who are unaware, unlike humans, a ‘bot’ cannot read distorted words. When you type the (correct) words in the box, it proves that you are human and the website allows you to register/access content/purchase goods/make reservations, etc.
Over a period of time, Captcha has become (almost) a standard security feature. In the video, von Ahn revealed that (by April 2011) there were more than 350,000 websites using Captcha, and approximately 500 million users were spending 10 seconds each, every day, while accessing various e-commerce sites.
The first reaction to the above is 'WOW': 350,000 websites, 500 million users. von Ahn too felt a sense of pride that his invention was being used by so many people, but then he also thought that each of these 500 million users was spending 10 seconds during the verification process; this translated to approximately 500,000 man-hours. Then came the thought, "Is there something I can do to utilise this effort to do something — something huge but simple — something that machines cannot do (as yet) as efficiently as humans can?" Needless to say, stopping the use of Captcha, given its benefits, was not an option. This thought was the seed of further research, resulting in what is commonly known as RECAPTCHA.

RECAPTCHA: von Ahn and his associate/intern came up with this idea on the basis of the findings of their research. The idea was to use those 500,000 man-hours of effort to digitise books. There are several projects doing this already, including one being pursued by Google. It is common knowledge amongst most people involved in the endeavour to digitise books that computers, and more specifically optical character recognition (OCR) technology, are applied for digitising books. Typically, this involves one person using a scanner device to scan one page at a time and then waiting for the OCR software to convert the scanned image into a document.

What is not very commonly known (at least to the public at large) is that the technology is not 100% accurate. Machines, and for that matter computers/software, at times are not able to 'recognise' many of the characters that are scanned by them. This is more so when the book being scanned is older than 10 years. The difficulty arises due to a variety of factors such as the typeface used, yellowing of the pages, creases in the pages and the wear and tear/condition of the book. In all such cases, human effort is required (computers cannot do it as easily as humans). Thus, RECAPTCHA was born.

Once again the idea was a simple one: the visitor was presented with two words (instead of one in Captcha), one which was known to the software and the other which was required to be 'recognised'. When both words were recognised, the visitor was granted access to the site he was visiting. All the time, in the background, RECAPTCHA was comparing this result with the responses provided by another 10 users (who were given the same combination). If the results matched, then another word was digitised.
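The two-word agreement mechanism described above can be sketched as follows (a simplified illustration, not reCAPTCHA's actual code; the function names are mine, and the 10-response threshold is taken from the description above):

```python
# Sketch of the reCAPTCHA idea: a known "control" word proves the user is
# human; independent answers for the unknown word are tallied until enough
# users agree, at which point the word is considered digitised.
from collections import Counter

AGREEMENT_THRESHOLD = 10  # matching responses needed, per the article

def check_response(control_word, typed_control, typed_unknown, votes):
    """Return True (human) if the control word matches, recording the vote."""
    if typed_control.strip().lower() != control_word.lower():
        return False  # failed the known word: treat as a bot, ignore the vote
    votes[typed_unknown.strip().lower()] += 1
    return True

def digitised_word(votes):
    """Accept the majority reading once the agreement threshold is reached."""
    word, count = votes.most_common(1)[0]
    return word if count >= AGREEMENT_THRESHOLD else None

votes = Counter()
for _ in range(10):                      # ten users type the same answer
    check_response("upon", "upon", "morning", votes)
print(digitised_word(votes))             # -> morning
```

Note how a failed control word discards the unknown-word answer entirely: a bot that cannot read the known word never gets to pollute the tally for the word being digitised.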

Once again the idea was a runaway success: the number of words digitised by these sites exceeds 100 million a day, the equivalent of the effort required to digitise approximately 2.5 million books a year. Given this success, RECAPTCHA was acquired by Google.

von Ahn and team revisited their question and embarked on yet another journey. This time they decided that all the parties involved in the process should have something to gain. In Captcha, human effort was used to verify the users' status as humans; while this helped website owners, it resulted in wastage of human effort. Recaptcha used this human effort to convert books; once again website owners and book readers gained, but the visitors who were assisting in the digitising process were not compensated. This thought gave birth to DUOLINGO.

DUOLINGO:

Just like digitising books, translating content is another 'skill' which machines/software do not possess (as yet). It's one thing to merely translate words and quite another to translate words with context. It is the context in which the words are spoken which makes the text readable and, by that measure, more comprehensible. If you don't believe me, try using the translators available for converting a poem in Hindi to English and vice versa (no offence, but it's like watching a Chinese movie dubbed in Tamil: the tone/pitch of the dialogue or a fight scene versus the body language. I have always found it hilarious; try it sometime). Coming back to the topic . . . von Ahn and team came up with the idea of DUOLINGO.
What von Ahn and team realised was that there was content on the web which needed to be translated. The video cites the example of translating content from the English version of Wikipedia into the Spanish version: currently, the Spanish version is only 30% of the English version, and the cost of converting it, as the video suggests, from the lowest cost vendor (based on the effort of exploited labourers in a third-world country) was $50 million. Cost apart, the other quandary was where to find enough people who know more than one language and are willing to participate in the translation process. The solution: there are hundreds of thousands of people who want to learn another language and have to pay money to do so. Here was an opportunity to learn and apply at the same time, without spending anything from their pockets. Now there is a win-win for almost all!

  •     Content can be translated
  •     With context, translation is easier, fun, improves the learning/experience
  •     The accuracy is far higher than that offered by software currently available and almost comparable to the accuracy of a professional translator
  •     Both parties don’t pay money but put in their ‘efforts’
  •     Both parties gain
  •     And, in hindsight, less exploitation of labour

The result: based on current stats the translation can be done in a matter of weeks.

Now that’s what I call innovation.

Like I said earlier, I was completely blown away when I saw the video, I am sure after reading this write-up (and maybe watching the video) you will be too.

Wish you Merry Christmas and a Happy New Year.

Disclaimer:

The purpose of this article is not to promote any particular site or person or software. The sole intention is to create awareness and to bring in to limelight some thought-provoking content.

Social networking – Be careful out there – I

About this Article

Social networking is "hep" and the "in thing" nowadays. The entire Generation Y is hooked on it. Undoubtedly, it is a convenient way to connect with family, friends and other people. But that's the bright side; what most people don't realise is that there is a dark side too. This article is aimed at highlighting some of the perils of social networking sites, specifically related to the privacy of the account holder.

Background

Today, it's a common feature to see teenagers hooked on to social networking sites all the time, as if it were a life support system. What's more, teenagers are likely to have several friends and connections online, in the virtual world, even when continents, distances and time zones may separate them. Sometimes, it is at the cost of having friends in the physical (or real) world.

Come to think of it, it really isn’t all that different from the past. I mean that, once upon a time it was “hep” to have pen friends, then email, bulletin boards and chat rooms became a fad. One could say that it’s the same old wine in a new bottle – today you have friends, followers and connections on Facebook, Twitter and Linkedin (to name a few popular social networking sites).

Agreed, it's a convenient way to connect with family, friends and other people with common interests. And with the technological advances today, it's almost effortless, because the site does all the work of finding all your "long lost" friends, colleagues and relatives. Many times, these sites offer "suggestions" regarding people you may be interested in connecting to or groups you may want to follow. This, you may say, is the bright side of social networking. However, what people don't know (or care enough to know) is that there is a dark side as well.

“Nahhh!!!!! Can’t be!!!! Social networking is harmless banter, we are jus hangin out, what’s wrong with that?????

Chill yaar, you are just being paranoid.” I am sure that you have heard this before. Well, you are about to get a rude awakening.

The Dark Side

A couple of weeks ago, a furore was raised in the press and all over the internet when two teenaged girls were hauled to the police station for posting some innocuous status updates on one of the popular social networking sites. A lot was written on how the law enforcers should have acted, how draconian the internet law is when it comes to freedom of speech and, of course, the whole debate of what should be done (or should not be done) and who is responsible (or irresponsible). Despite all this noise and chatter about the who, what, where and when, most people missed out on a little known 'open secret'. What's this 'open secret', you may ask.

Well, forget all the chatter and the noise for a moment and think, how many people actually gave a thought to the following:

• How did the mob come to know of the “personal” post?

• Were they friends with the teen who posted the message?

• Did the teen intend that persons other than her “friends” see the post/tweet?

• Can persons other than one’s friends see his/ her posts/tweets?

• How can anyone see my posts /tweets?

• And of course, the million dollar question that begs to be answered –

How did they get the address of the teen who posted the update and the vital information that the teen was located within (ahem) striking distance? This question becomes a ten million dollar question when you ask, if they were not friends and they were not connected, were they supposed (allowed) to see such personal information (i.e., Location of the person putting up the post).

In all the printed press, news reports, countless Tweets and Facebook updates, there is hardly a peep into these questions. If you were a conspiracy theorist, you would know for sure that "something just ain't right here". You may have guessed it by now …. Nobody noticed (and in all probability it is likely to remain unnoticed) that the real transgression was a compromise of the privacy of your personal data.

By the way, if you didn’t ask this question earlier, then it would be a good indicator that you too have chosen to remain blissfully unaware of “what’s out there”.

The Ugly Truth

SOCIAL NETWORKS AREN'T RESPONSIBLE FOR YOUR PRIVACY – YOU ARE. What most people (individuals who use social media regularly and extensively) fail to realise is that you are parting with some very vital and sensitive personal information right from the time you open an account with these social networking sites. It's pretty standard to give information such as your full name, where you live, what you do, what you like (or dislike) and your date of birth. You post pictures of yourself and your family, your precious possessions, your triumphs, etc. And to top it all, you literally "strive" to keep this information updated every day (and in some cases, every waking moment). You take solace (my view: choose to remain blissfully unaware) in thinking that:

• this information is with the site;

• it’s secure, behind layers of security;

• they have a privacy policy, they can't share it with anyone;

• only my friends and connections can see it;

• It’s harmless banter (yeah!!!, really!!! Do make it a point to tell it to the mob when they come visiting);

• I will delete it after some time.

But as they say, 'ideal' and 'real' are two completely different and mutually exclusive things. Some open secrets that you must know:

Default Settings:

When you sign up, the social networking site sets your privacy controls to “default settings”. I am sure there would be several instances wherein you have accepted the prompt that the settings are at default albeit without really checking or understanding what “default settings” really means. In some cases, default means that everyone can read your post and access all the information that you give the site.

Changes in Privacy Policies

While some people are wise enough to check what the default setting is, they sometimes fail to keep track of changes in the privacy policy of the social networking site. What people do not account for is that Privacy Policies can change. In some cases, these sites notify you, but in many cases, by continuing to access the site or use the service you "by default" agree to the revised Privacy Policy. How is that possible, you ask; I have a right to be informed, they have to tell me!!!

Don't they?????? All these questions are the types you ask after reality knocks you down. The truth is that it all boils down to the terms and conditions of service. YESSSSS, the ones where you click "AGREED" without even bothering to read what they say, let alone understanding the implications.

Somewhere in the fine print, there are terms which say that "the service provider is at liberty to alter the terms of the Privacy Policy and it is your obligation to look them up on a regular basis. Further, if you continue to use the site, it will be presumed that you have read the Privacy Policy and have agreed to the revised terms."

Here is a question for you. Google and Facebook have both revised the terms of their Privacy Policy (mainly their policy on what data will be collected and how they intend to use it). They were "kind" enough to send a mail/notification about the change and the date from which the policy would become effective. How many of you saw this mail in your inbox/notification when you visited the site? More importantly, how many of you made an attempt to see "broadly" what changes are likely to take place? If you haven't done it as yet, then rest assured that you will have no one but yourself to blame.

Apps and Games

If you think that you have covered all bases by reading the Privacy Policy, having understood and agreed to its terms and acted very cautiously, even then one could say you have left yourself exposed. Sure, you read the Policy for the hosting site, but what about the apps/games that are made available on the site? More often than not, if your friend has been using one or recommends it, you too sign up because you want to be with the gang and cannot fall behind. Well, if that is the case, I would say you covered the pin holes but left the manholes wide open. It is quite possible that these apps/games/utilities may have a policy which is quite different from that of the hosting site, and it might not be very protective.

Difference Between Free and Freemium

Just because the service is free or doesn't cost you anything doesn't mean that there is no cost attached. It only means that the cost of providing the service to you is being borne/subsidised by someone else, i.e., what is offered to you for free is sold to someone else for a premium (hence the word freemium). Everybody, and I mean everybody (there may be a few exceptions like the Khan Academy), who is providing some free service to you is selling the data that you generate, in one way or the other, to somebody else. You may not believe it, but every time you say you "like" something, this data is collected, collated and analysed for future sale. Every comment about a product, a service, a brand, etc., be it good or bad, is tracked and stored for future sale. Not only this, if you like a brand, there is a very high probability that the very same social networking site (if not this one, then some other site) will help the brand sell "what you like" to your friends.

Paradox of Social Networking and Privacy:

It's a paradox because you are posting your personal and private data on a medium whose reason for existence is to promote "openness". So, on one hand you want the data to be in the public domain and, at the same time, you don't want anyone to see it. Funny, isn't it!!!! Reminds me of the famous quote from Shakespeare's Hamlet: "To be or not to be, that is the question".

There are several issues that still need to be dealt with, but woh kissa phir kabhie (that story, some other time).

The next part of this series will focus on some tips on dos and don’ts while posting on social networking sites.

Disclaimer: The information/issues discussed in the above write-up are based on several news reports, articles, etc., available in the public domain. The purpose of this write-up is not to promote or malign any person or company or entity. The purpose is merely to create awareness and share the knowledge that is already available in the public domain.
    

Social Networking – Privacy Settings in Facebook

About this write-up:
This is the third and concluding part of the three-part series dealing with security related issues faced when using popular social networking sites. This write-up deals with some of the settings and describes how and when these settings should be activated. While the suggested changes in the security settings may not guarantee that your personal information is never divulged to unknown persons, they would, however, act as a simple barrier to unwanted prying eyes.

Background

The previous two write-ups briefly highlighted how social networking sites are a boon as well as a bane. A boon because they help you reach out to your friends, contacts, etc., and connect with like-minded people. However, what most people don't realise is that you may be parting with a lot of personal information, more than you bargained for and, as a matter of fact, more than you even know or comprehend. It is a known fact that unscrupulous people can use this information for their own gains. Notwithstanding this, whenever disaster strikes, the people affected, more often than not, realise that they were sitting ducks.

Need for privacy

Social media sites, as we all know, permit us to meet/connect with other people on the net. Initially, we start off with close friends and relatives, whom we look up on Facebook almost as soon as we open an account. It's quite likely that they had asked you: are you on Facebook?? Why aren't you on Facebook??? You know….. giving you the feeling that everybody had boarded the bus to paradise and you were the only person left behind. So first of all you connect to them. You also put in all the small-small personal details about yourself, such as your school/college/university, date of birth, the locality where you stay/work, your chosen profession, likes and dislikes (yes, that too), etc. All this information is carefully and meticulously 'harvested' into humongous databases (read my write-up on Big Data).

The next step in the process of ‘networking’ is to ‘connect’ with like-minded people on Facebook. Suddenly, you will start getting prompts suggesting that such and such person has a similar trait and that therefore you may connect. What you don’t know is that when you started punching in personal information, an intelligent algorithm was working behind the scenes, putting all the pieces together. If not that, it was creating a ‘footprint’ for others to ‘find’ you.

While this seems convenient and intuitive, what most people don’t realise is that this very information can be used to ‘target’ you for something nefarious. It is in your interest that you don’t expose yourself to such risks. To do that, you need to review your privacy settings and tweak them in a manner that permits you to connect with ease while protecting you from the villains lurking in the shadows.

Privacy settings

Activating or deactivating privacy settings can be described as drawing a line (something like the proverbial Laxman rekha, one might say), a line beyond which you want to keep intruders out. Conversely, one may say that you draw the line also to create a boundary beyond which your personal stuff doesn’t go. Mind you, just as in the mythological tale, the villains will try every trick in the book to lure you across; it is for you to realise what’s in your own interest.

Very briefly, the security settings (on Facebook) can be used to:

• Manage how you connect with others
• Select the audience with whom you want to share your personal stuff, and
• Manage how others connect with you (mainly photo tagging)

STEP 1: Manage how you connect with people

In order for you to manage, you first need to know:

• Where to find your privacy settings (a bit obvious, I know, but just in case you didn’t know)
• Privacy shortcuts
• Controlling who can send you friend requests
• Changing the filter preferences for your messages
• Who can see your profile pictures (reminded me of a scene from Shah Rukh Khan Juhi Chawla starrer….where apro SRK says KKKKKKiran….)

So, first things first:

Where are my privacy settings?

To view and adjust your privacy settings:

1. Click in the upper-right corner of any Facebook page
2. Select Privacy Settings from the dropdown menu
3. Click on a setting (ex: Who can see your future posts?) to edit it, or use the left column to view your other settings

What are my privacy shortcuts?

Your privacy shortcuts give you quick access to some of the most widely used privacy settings and tools. Click at the top right of any Facebook page to see shortcuts that help you manage:

• Who can see my stuff?
• Who can contact me?
• How do I stop someone from bothering me?

This is also where you’ll find the latest privacy updates and other helpful tools. The shortcuts you find here may change over time to reflect the settings and tools that are most relevant.

Controlling who can send you friend requests

By default, anyone on Facebook can send you a friend request. If you’d like to change who can send you friend requests:

1. Click at the top of the page.
2. Click Who can contact me?
3. Choose an option from the dropdown menu below Who can send me friend requests?

Changing the filter preferences for your messages

You can change your filter preferences right from your inbox:
1. Go to your Other Inbox
2. Click Edit Preferences
3. Select Basic or Strict filtering
4. Click Save

Messages that are filtered out of your inbox will appear in your Other folder. If a message you’re not interested in gets delivered to your inbox, select Move to Other from the Actions menu. Keep in mind, anyone on Facebook can send you a message, and anyone can email you at your Facebook email address.

Who can see your profile pictures

When you add a new profile picture, here’s what happens:

• The photo is added to your timeline and appears in your Profile Pictures album.
• A thumbnail version of the photo is made and appears next to your name around Facebook. This helps friends identify your posts and comments on Facebook.
• Your current profile picture is public. You can change who can see likes or comments on the photo.

Step 2: Select the audience with whom you want to share your personal stuff

This includes:

• When I share something, how do I choose who can see it?
• How can I use lists to share to a specific group of people?
• Can I change the audience for something I share after I share it?
• How do I control who can see what’s on my timeline?
• What is my activity log?

When I share something, how do I choose who can see it?

You’ll find an audience selector tool most places you share status updates, photos and other stuff. Just click the tool and select who you want to share something with.

The tool remembers the audience you shared with the last time you posted something, and uses the same audience when you share again unless you change it. For example, if you choose Public for a post, your next post will also be Public unless you change the audience when you post. This one tool appears in multiple places, such as your privacy shortcuts and privacy settings. When you make a change to the audience selector tool in one place, the change updates the tool everywhere it appears.
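The “sticky” behaviour described above — the selector reusing the last audience you chose until you pick a new one — can be modelled with a tiny sketch. This is purely illustrative (the class and field names are invented for this example), not Facebook’s actual implementation.

```python
# Minimal sketch of a "sticky" audience selector: the audience used for
# the last post becomes the default for the next one. All names here are
# illustrative assumptions, not Facebook's real code.

class AudienceSelector:
    """Remembers the last audience chosen and reuses it by default."""

    def __init__(self, default="Friends"):
        self._last = default

    def post(self, content, audience=None):
        # If no audience is given, fall back to the last one used.
        if audience is not None:
            self._last = audience
        return {"content": content, "audience": self._last}

selector = AudienceSelector()
first = selector.post("Hello!", audience="Public")
second = selector.post("Another update")  # no audience given

print(first["audience"])   # Public
print(second["audience"])  # Public -- carried over from the previous post
```

The second post silently inherits the Public audience, which is exactly why it pays to glance at the selector before every post.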

The audience selector also appears alongside things you’ve already shared, so it’s clear who can see each post. If you want to change the audience of a post after you’ve shared it, just click the audience selector and select a new audience.

Bear in mind, when you post to another person’s timeline, that person controls what audience can view the post. Also, anyone who gets tagged in a post may see it, along with their friends.

How can I use lists to share to a specific group of people?

Lists give you an optional way to share with a specific audience. When writing a post or sharing a photo or other content, use the audience selector to pick the list you want to share it with.

Can I change the audience for something I share after I share it?

Yes, you can use the audience selector to change who can see stuff you share on your timeline after you share it. Keep in mind, when you share something on someone else’s timeline, they control the audience for the post.

How do I control who can see what’s on my timeline?

•    You can share basic information like your hometown or birthday when you edit your timeline. Click Update Info (under your cover photo) and then click the Edit button next to the box you want to edit. Use the audience selector next to each piece of information to choose who can see that info.

•    Anyone can see your public information, which includes your name, profile picture, cover photo, gender, username, user ID (account number), and networks.

•    Only you and your friends can post to your timeline. When you post something, you can control who sees it by using the audience selector. When other people post on your timeline, you can control who sees it by choosing the audience of the Who can see what others post on your timeline setting.

•    As you edit your info, you can control who sees what by using the audience selector.

•    Before photos, posts and app activities that you’re tagged in appear on your timeline, you can approve or dismiss them by turning on timeline review. Keep in mind, you can still be tagged, and the tagged content (ex: photo, post) is still shared with the audience selected by the person who posted it, in other places on Facebook (ex: News Feed and search).

•    Set an audience for who can see posts you’ve been tagged in on your timeline.

•    To see what your timeline looks like to other people, use the View As tool.

What is my activity log?

Your activity log is a tool that lets you review and manage what you share on Facebook. Only you can see your activity log.

Step 3: Manage how others connect with you— mainly photo tagging

This includes

•    How do I remove a tag from a photo or post I’m tagged in?
•    What is timeline review? How do I turn timeline review on?
•    How do I review tags that people add to my posts before they appear?
•    How do I control who sees posts and photos that I’m tagged in on my timeline?
•    How can I turn off tag suggestions for photos of me?

How do I remove a tag from a photo or post I’m tagged in?

Hover over the story, click and select Report/Remove Tag from the dropdown menu. You can then choose to remove the tag or ask the person who posted it to take it down.

You can also remove tags from multiple photos at once:

1.    Go to your activity log
2.    Click Photos in the left-hand column
3.    Select the photos you’d like to remove a tag from
4.    Click Report/Remove Tags at the top of the page
5.    Click Untag Photos to confirm

Remember, when you remove a tag, that tag will no longer appear on the post or photo, but that post or photo is still visible to the audience it’s shared with other places on Facebook, such as in News Feed and search.

What is timeline review? How do I turn timeline review on?

Posts you’re tagged in can appear in News Feed, search and other places on Facebook. Timeline review is part of your activity log and lets you choose whether these posts also appear on your timeline.

When people you’re not friends with tag you in a post, they automatically go to timeline review. If you would also like to review tags by friends, you can turn on timeline review for tags from anyone:

1.    Click at the top right of any Facebook page and select Account Settings

2.    In the left-hand column, click Timeline and Tagging

3.    Look for the setting Review posts friends tag you in before they appear on your timeline? and click Edit to the far right

4.    Select Enabled from the dropdown menu

How do I review tags that people add to my posts before they appear?

Tag review is an option that lets you approve or dismiss tags that people add to your posts. When you turn it on, then anytime someone tags a photo or post you made, that tag won’t appear until you approve it. To turn on tag review:

1.    Click at the top right of any Facebook page and select Account Settings

2.    In the left-hand column, click Timeline and Tagging

3.    Look for the setting Review tags friends add to your own posts on Facebook? and click Edit to the far right

4.    Select Enabled from the dropdown menu

When tag review is on, you’ll get a notification when you have a post to review. You can approve or ignore the tag request by going to the content itself.

It’s important to highlight that when you approve a tag, the person tagged and their friends may see your post. If you don’t want your post to be visible to the friends of the person tagged, you can adjust this setting. Simply click on the audience selector next to the story, select Custom, and uncheck the Friends of those tagged and event guests box.

How do I control who sees posts and photos that I’m tagged in on my timeline?

To choose who can see posts you’ve been tagged in after they appear on your timeline:

1.    Click at the top right of any Facebook page and select Account Settings

2.    In the left-hand column, click Timeline and Tagging

3.    Look for the setting Who can see posts you’ve been tagged in on your timeline? and click Edit to the far right

4. Choose an audience from the dropdown menu

You can review photos and posts you’re tagged in before they appear on your timeline by turning on timeline review. Keep in mind, photos and posts you hide from your timeline are visible to the audience they’re shared with other places on Facebook, such as in News Feed and search.

How can I turn off tag suggestions for photos of me?

To choose who sees suggestions to tag you in photos:

1.    Click at the top right of any Facebook page and choose Account Settings

2.    Click Timeline and Tagging from the left-hand column

3.    Under the How can I manage tags people add and tagging suggestions? section, click Who sees tag suggestions when photos that look like you are uploaded?

4. Select your preference from the dropdown menu

When you turn off tag suggestions, Facebook won’t suggest that people tag you when photos look like you. The template that we created to enable the tag suggestions feature will also be deleted. Note that friends will still be able to tag photos of you.

Well, these were the basics.

If you want to learn more, visit http://www.facebook.com/help/privacy; alternatively, you can do a Google search and you will find several useful links to help you on this issue (not only for Facebook).

Disclaimer: The purpose of this write-up is to spread awareness, promote ethical and safe computing practices and share knowledge. This write-up does not seek to discredit or malign any particular person, corporation or business in any manner whatsoever.

Mobile Payments — the future trend

This write-up discusses some of the prevailing trends and products available for making payments using a mobile phone. While there is a lot of similarity in the payment process, there are subtle differences in the technologies used and the accompanying advantages/disadvantages. This write-up seeks to highlight some of those differences.

To say that the advent of mobile telephony in India has changed the lives of countless millions would be stating the obvious. Today, mobile phones are not just a means of communication; they are much more. I am sure neither Alexander Graham Bell (who invented the telephone in 1876) nor Dr. Martin Cooper (who is credited with designing the first practical mobile phone back in 1973) ever imagined that one day their invention would be used to:

  • Flash1 one’s status (funky, snooty, VFM)
  • Collect memories (photos)
  • Stay connected (Facebook & Twitter)
  • Keep updated (news, alerts)
  • Entertain (music, video)
  • Transact (m-commerce)
  • Influence people (Obama’s election campaign) 

Be that as it may, today, mobile phones are an integral part of our day-to-day environment and (at the cost of repeating myself2), their importance and our dependence on this marvel of technology are growing by the day. The phone has become the hub for all our activities, from e-mailing and browsing to paying bills and transferring money. In fact, mobile phones are fast replacing your credit/debit/ATM cards (plastic money) as a convenient mode of transacting. For the uninitiated, please watch the recent ads put up by Airtel and IndusInd Bank. There are several active players3 and they offer the same or similar services, for a charge (of course). Here it is important to understand what is on offer, and then pare down expectations accordingly.

How does a mobile banking/wallet work?

Mobile banking (not to be confused with phone banking) allows you to conduct financial transactions on your phone just as you would at a bank branch or through Net banking. Banks are now evolving this facility as they launch innovative products (this sometimes entails installing an app on your phone). In the mobile banking segment, all telecom companies have tie-ups with different banks that allow you to avail of banking services.

The process is pretty simple, and the steps could be something like:

  • Register with the service provider: open an account with the concerned bank or telecom company.
  • In case of a bank, register for Net banking.
  • Use a Java-based phone4.
  • Activate GPRS services on your connection, so that you can access the Net5.
  • Install the bank’s phone app.

To transfer funds, you will have to:

  • Log in using the bank’s app menu and input the mobile phone number or bank account number of the beneficiary.
  •  Message the PIN you receive from the bank to the beneficiary who will also receive a secret number.
  • The recipient will have to log in with both PINs at the ATM to withdraw the money.
  • If the funds are being transferred to a bank account, it will take about four working days.

Practical applications:

IndusInd Bank’s cash-to-mobile service enables customers to transfer money to anybody, including those who do not have an IndusInd Bank account. A bank customer is required to download the bank’s app on his phone, and then put in the phone number of the person to whom he wants to send the money, along with the transaction amount. The bank sends a message to the remitter and the beneficiary, along with a different PIN to each. The remitter is required to message his PIN to the beneficiary, who can then use both PINs and his mobile number to withdraw cash from an IndusInd Bank ATM. The service is free, but operator charges would apply. Also, the sender will need a Java-enabled handset. Airtel Money has a different offering.
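The dual-PIN flow described above can be sketched in a few lines: the bank issues one PIN to the remitter and another to the beneficiary, and cash comes out of the ATM only when both PINs plus the beneficiary’s mobile number match. The PIN format, data layout and matching logic below are illustrative assumptions, not the bank’s actual system.

```python
# Simplified sketch of a two-PIN cardless cash withdrawal, as in the
# cash-to-mobile flow described above. All details are assumptions.

import secrets

def initiate_transfer(remitter_phone, beneficiary_phone, amount):
    # The bank generates two independent 6-digit PINs and (in reality)
    # SMSes one to each party.
    return {
        "remitter": remitter_phone,
        "beneficiary": beneficiary_phone,
        "amount": amount,
        "remitter_pin": f"{secrets.randbelow(10**6):06d}",
        "beneficiary_pin": f"{secrets.randbelow(10**6):06d}",
    }

def atm_withdraw(txn, mobile, pin_a, pin_b):
    # Withdrawal succeeds only if the mobile number is the beneficiary's
    # and both PINs (in either order) match the ones the bank issued.
    pins_ok = {pin_a, pin_b} == {txn["remitter_pin"], txn["beneficiary_pin"]}
    if mobile == txn["beneficiary"] and pins_ok:
        return txn["amount"]
    return 0

txn = initiate_transfer("98200xxxxx", "98210yyyyy", 2000)
# The remitter forwards their PIN to the beneficiary, who keys in both.
cash = atm_withdraw(txn, "98210yyyyy",
                    txn["remitter_pin"], txn["beneficiary_pin"])
print(cash)  # 2000
```

Splitting the secret across two channels is the whole point of the design: neither party alone, nor someone who intercepts a single SMS, can withdraw the money.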

Airtel Money can be used on any mobile phone, and you can register for it by dialling *404# or at an authorised Airtel Money retailer. There are two types of accounts. The first one is an express account, wherein you can load Rs.10,000 and use it to pay utility bills or book rail/flight tickets on travel portals. The upgraded version is called a power account, which can be loaded with amounts up to Rs.50,000. This can be done through Net banking or an Airtel Money retailer.

Charges?

There is a minimum fee for each transaction. For instance, a transfer of up to Rs.500 will cost Rs.5, while larger transfers of up to Rs.10,000 will entail a fee of Rs.10. Under mobile banking, apart from the transaction charge, one also pays Internet charges and SMS charges to the service provider.
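The slab structure quoted above can be expressed as a small lookup. The exact slab boundaries and the treatment of amounts above Rs.10,000 are assumptions for illustration; actual tariffs vary by provider.

```python
# Sketch of the slab-based transfer fee quoted in the text:
# Rs. 5 for transfers up to Rs. 500, Rs. 10 for larger transfers
# up to Rs. 10,000. Boundaries are illustrative assumptions.

def transfer_fee(amount):
    """Return the fee in rupees for a transfer of `amount` rupees."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount <= 500:
        return 5
    if amount <= 10_000:
        return 10
    raise ValueError("above the assumed per-transaction limit")

print(transfer_fee(500))    # 5
print(transfer_fee(2_000))  # 10
```

Note that for small transfers the fee is proportionally steep: Rs.5 on a Rs.100 transfer is a 5% charge, versus 0.1% on a Rs.10,000 transfer.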

Other considerations:

The Reserve Bank of India (RBI) has capped the transaction limit at Rs.10,000 for all essential services like ticketing, utility bill payments, etc. For non-essential transactions, the limit is set at Rs.5,000. There is also a ceiling of Rs.50,000 for loading the wallet.
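The caps mentioned above lend themselves to a simple validation sketch. The category names and the shape of the checks are illustrative assumptions; the rupee figures are the ones quoted in the text.

```python
# Sketch of the wallet caps described above: Rs. 10,000 per transaction
# for essential services, Rs. 5,000 for non-essential ones, and a
# Rs. 50,000 ceiling on the wallet balance. Structure is illustrative.

LIMITS = {"essential": 10_000, "non_essential": 5_000}
WALLET_CEILING = 50_000

def validate(balance, amount, category):
    """True if the transaction is within the cap and covered by balance."""
    if category not in LIMITS:
        raise ValueError(f"unknown category: {category}")
    return amount <= LIMITS[category] and amount <= balance

def can_load(balance, top_up):
    # Loading must not push the wallet past the ceiling.
    return balance + top_up <= WALLET_CEILING

print(validate(20_000, 8_000, "essential"))      # True
print(validate(20_000, 8_000, "non_essential"))  # False: over the 5K cap
print(can_load(45_000, 10_000))                  # False: breaches 50K
```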

While online banking has picked up pace, mobile banking is currently subdued. One reason for this is that whenever a new technology is introduced in the market, it takes time for people to familiarise themselves with it, which is why the growth is slow. Phone technology is another problem area, as there are different platforms of mobile banking for different phones. Also, let us not forget the whole business of bandwidth — all these applications need secure and good connections.

 Presently, most banks have decided to take one step at a time. They are not pushing hardcore banking services, but only presenting mobile banking as an enquiry tool to entice customers to carry out transactions. For example, SMS alerts for bill payment may tempt you to pay the bill through the phone itself.

What’s in store for the future?

Notwithstanding the above, the advent of smart phones has definitely spelt good news for the mobile banking segment. Why? For starters, the younger generation today prefers mobiles to PCs. Secondly, statistics7 suggest that there are approximately 13 million Internet users in the country, as against 911 million mobile phone users. Obviously, the numbers would justify future trends and investments.

This decade belongs to the mobile telephone, and the use of phones (smart or otherwise) is going to be the trend of the future. Until then, bonne chance.

1    On April 3, 1973, Dr. Martin Cooper showed off to his rival Joel Engel, head of research at AT&T’s Bell Labs, by placing a call to him on the first Motorola DynaTAC prototype while walking the streets of New York City.

 2    Refer to this feature in the BCAJ March 2010.

 3    Airtel, Oxicash, Paymate, ICICI, Citi, Indusind,
etc.

 4    Not required for Airtel Money.

  5    Not required for Airtel Money.

6    This is based on the information available in the public domain; there may be other charges/conditions. Readers are expected to do their own due diligence before subscribing to the service.

   7  Released by TRAI in February 2012.



Google Hangout – III

About this write up: This write up is the 3rd part of the series of articles on Google Hangout. This write up focuses mainly on some of the more popular instant messaging apps. The article briefly describes some of the features of these apps and highlights how hangout appears to have an edge over its peers. This article is the third and final installment of a series of articles on this topic. The first write up dealt with the telecom ecosystem and the different messaging apps/options available to users. The write up also dealt with the rise and fall of these apps/options over time. The second installment mainly dealt with the apps like SMS and BBM and why they are losing momentum. In this write up, we will briefly look at the current favourites in the instant messaging apps space and how they compare with Google Hangout (or vice versa for that matter).

Popular instant messaging apps

The previous write-ups dealt briefly with why instant messaging apps became popular. Some of the key factors were:

Cost factor: Short Messaging Service (i.e., SMS) became a rage during the period when the cost of voice calls was sky-high. Its popularity started declining when the telecom service providers started reducing voice call tariffs. As a matter of fact, the general perception today is that it is cheaper to call than to send an SMS, especially when a call costs 1 paisa per second (barely 60 paise for a full minute) as against Re. 1 for just 140 characters per SMS.
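The back-of-the-envelope arithmetic behind this perception is easy to check. The tariffs below are the illustrative ones quoted in the text (1 paisa per second for calls, Re. 1 for a 140-character SMS), not current rates.

```python
# Cost comparison behind the "cheaper to call than to SMS" perception,
# using the illustrative tariffs quoted in the text.

CALL_PAISE_PER_SECOND = 1
SMS_PAISE = 100           # Re. 1 per SMS
SMS_CHARS = 140

call_cost = 60 * CALL_PAISE_PER_SECOND   # a full one-minute call, in paise
paise_per_char = SMS_PAISE / SMS_CHARS   # what each SMS character costs

print(call_cost)                 # 60
print(round(paise_per_char, 2))  # 0.71
print(call_cost < SMS_PAISE)     # True: a minute of talk beats one SMS
```

At these rates a whole minute of conversation costs less than a single text, which goes a long way towards explaining why SMS volumes fell once per-second billing arrived.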

Instant communication: The fact that the message would be delivered instantly — almost anywhere in the world — to the person’s phone was a huge advantage over email. This was true before the BlackBerry boys came in and before smart phones joined the race. Even today, a good majority of the population prefers instant messaging to email. To be candid, I can’t even recall the last time I shared a joke or a personal message with my friends or dear ones over email. As a matter of fact, not a day goes by without one of my colleagues or friends remarking that WhatsApp and the like have made it so much easier to connect with family members.

Ease of use: This is perhaps one of the most important factors, especially where seniors are concerned. The younger generation has always been known to be tech-savvy and to have the uncanny ability to adapt to the latest technological developments; one would say they thrive on change. As against this, seniors find change unnerving; they prefer the security of the old, tried and tested. The hurdle is even bigger when they have to take a number of steps to achieve the same goal. Instant messaging has changed that significantly. To give a simple illustration, if you are using WhatsApp and you create groups that include your parents, it gives them an opportunity to know what’s going on, etc. There is a small illustration later in this write-up on this.

Informal communication: Another reason instant messaging is very popular is that email has generally been associated with formal communication, whereas instant messaging is perceived as less formal and mostly casual.

Mass reach: Compare instant messages with voice calls for alerts — charges on your debit card, reminders for utility payments, etc. Which would you prefer? My vote would certainly go to instant messages; they are far less intrusive. Imagine receiving a telephone call every time a charge was made on your card or a utility payment was due: one more voice to nag you….

That being said, let’s move on to the apps which are popular:

Popular instant messaging apps:

WhatsApp:
This one is my favourite. In fact, I wrote an article recommending this app in the BCAJ. It is one of the apps (out of 75 on my phone) for which I have paid money (it’s free now; I have only 5 paid apps, the other 70 are free).

This app is quite efficient. Apart from text messages, the user can also send photos, videos and sound files (this was added after WeChat came on the scene). The app will help you save a lot of money on your phone bill (especially if you have an unlimited data plan). Some of the other useful features include group messaging, location sharing and time stamps. What I particularly like about WhatsApp is that:

• it works on a simple GPRS connection as well as on Wi-Fi (no need for a data plan)
• I don’t need to add contacts separately (unlike BBM)
• Even if I change my phone, new messages will come to the new phone, even if I don’t have anyone’s PIN
• It works on all popular devices/operating systems

WeChat:
This app is fast gaining popularity and there are several ads being aired on almost all channels. The biggest plus is that, apart from texting (and the features described above), users can also send voice messages.

To be honest, I don’t have much comment or experience in using this app. There were a couple of turnoffs however:

• One needs to register an account with WeChat
• Why bother sending a voice message – just call
• Chinese ……snooping….

Skype:
Skype has been around for several years now and was recently bought by Microsoft. It is quite popular even today, and is available on the desktop as well as on the phone. It became popular because it gave users the ability to have a real-time voice conference (one-to-one, one-to-many or many-to-many). Many seniors use it to talk to their dear ones living around the world. Once again, I don’t have much comment or experience in using this app. There were a couple of turnoffs, however:

• The app is very resource-hungry: it takes a lot of space and RAM when in operation

• Voice quality is decent but the video is often grainy and jerky (this could be a bandwidth or a hardware issue on either side; I did not face this as much in Google Hangout, though)
• One needs to register an account. You can also call regular phones, but (I think) you have to pay charges for this facility

Viber:
This app is also quite popular. The biggest plus is that it allows real-time voice calls. I have tried it on my phone; there is some time lag, but the voice clarity is pretty good (even on GPRS). The app gives you the convenience of group chatting and alerts you as and when users download and activate it on their phones (WhatsApp doesn’t give such an alert). It is fairly popular and in many ways scores over Skype due to ease of use and speed. Unlike Skype, it doesn’t offer video chat. Recently, they have started offering a desktop version.

Google Hangout:
Google has taken its time testing this app, moving from Google Chat to Google Talk and now Hangout. It works on most smart phones (on the desktop, it’s already linked to your Gmail account). The pluses are that it allows you to send text messages and hold video conferences. I have tried it a couple of times; compared to Skype and FaceTime (iPhone/iPad specific), the video quality is somewhere in between (better than Skype but still miles away from FaceTime). Just last week, I was trying to get on a video chat with someone located in Canada, and after 10 minutes of Skyping he said, “Why don’t we switch to Google Hangout? It’s so much better.” I think that more or less summed it up for me.

The next write-up will focus on what Google Hangout has to offer and what the future may have in store for users. It will also be the concluding part of this series. Do look forward to it.

Disclaimer: The purpose of this article is not to promote any particular site, person or software. Further, comments about various products and services are based on user-experience-related information available in the public domain. There is no intention to malign any product or service in any manner whatsoever. The sole intention is to create awareness and to bring into the limelight some thought-provoking content.

Google Hangout – II

At the outset, I would like to mention that when I started penning the series titled Google Hangout, I had intended to cover more than just the features of the Google Hangout app. The intention was to build a bit of background and create awareness of where we were and where we are heading. While there is a growing perception that Google Hangout may be a game-changer, there is very little awareness of the dynamics involved. In this series, I have endeavoured to bring out some of the facts which I thought would put things in perspective. The concluding write-up of this series will cover some of the features which suggest that Google Hangout will change the way we communicate with each other.

About this write-up: This is the second part of the series of articles on Google Hangout. This write-up focuses mainly on the events before Google Hangout was put up in the public domain and how these events will, directly or indirectly, influence things to come — one of which is the success of Google Hangout as an instant messaging app.

The previous article briefly discussed the events related to the development of the app-based ecosystem and the rise and decline of various players in the arena of instant messaging apps. In this write-up, we will discuss some of the popular instant messaging apps which are relevant today but which, in the foreseeable future, may become ‘also-rans’.

The previous write-up briefly touched upon the early events leading to the advent and subsequent development of the instant messaging ecosystem. It also discussed why Short Messaging Service (SMS) became popular. Thereafter, BlackBerry Messenger (BBM) came into the picture and shook up the world. Today, the scene has changed; the tide has turned for BBM, which is fast losing ground to newer and more nimble players in the market. The following paragraphs set out some of the facts which will help readers understand the key factors at play.

The rise and fall of SMS

As discussed in the previous write-up, between 2000 and 2005, SMS was a popular means of communication. The prohibitively high minimum cost of a voice call was the primary reason. But as time went by, technological advancements, easy and cheap access to mobile telephony, the drop in the minimum cost per call, etc., led to a decline in the popularity of SMS as a means of communication. Once voice call tariffs started to fall, users started realising that there were disadvantages to using SMS: apart from the difference in cost between an SMS and a voice call, there were factors such as the limit on the number of characters per SMS, the limitations of the phone keypad, the perceived need for rich media compared to plain vanilla text, abuse of the SMS system by mass advertisers, and so on.

One may say that the role of SMS as an enabler of instant communication reached its peak when it became the de facto means of mass communication. At its peak, SMS was used to exchange greetings during festivals like Diwali, Christmas and Id; banks and other service providers sent updates of transactions related to money transfers, credit card use and bank balances; and users exchanged daily SMSes (containing jokes, positive thoughts, etc.) on a mass scale. A whole new ecosystem was spawned by mass SMS-ing.

While mass SMS-ing capability was the bright side, there was a dark side too. Mass advertisers started targeting large numbers of mobile users for mass messaging. Consumers across the country started receiving (mostly unwanted) messages, ranging from offers for the services of a plumber or AC repairman to the selling of insurance, stock trading tips, and so on and so forth. Somehow this development seemed inevitable. Mass marketers had found that their calls to mobile phone users (for selling various products and services) were being ignored due to the caller ID facility; they realised that while a call could be ignored, there was no way to stop someone from sending an SMS. The abuse became so rampant that the Telecom Regulatory Authority of India (TRAI) was forced to clamp down hard. TRAI imposed several restrictions, such as the National Do Not Call Registry, a requirement for mass advertisers to register, and a limit on the number of messages per phone number per day, among other diktats.

Thus, the fate of SMS as a means of instant communication was more or less sealed. Today, savvy users consider SMS-ing not only an expensive option but also a limiting one when they want to reach out to their ever-expanding network of friends.

The rise and (imminent) fall of BBM
During 2005–2010, i.e., right about the time that the reign of SMS was nearing its end, BBM started gaining ground as the de facto means of instant messaging and communication. The BlackBerry (BB) device was already quite popular as a smart device for official communication (i.e., emailing). With a growing number of users and easy access to BB data services, BBM started covering the ground lost by SMS. By 2008-09, BBM was already accepted by the corporate world as a reliable instant messaging service. In the BB world, email was the official means of corporate communication and BBM was the unofficial yet cool means of communication. Users formed groups and used BBM to exchange jokes, positive messages, etc., just as they had used SMS in the past. The advantage that BBM offered was zero cost. As pointed out in my last write-up, while the BBM service itself was free, one would need to purchase a BB (approx. cost Rs. 18,000 plus) and pay for the data charges. Another relevant point was that right up to 2007-08, users would restrict their BB subscription to mail and data services; voice calls were used sparingly. Apparently, at that time, BB voice services had not been permitted, due to which the cost of a voice call was far too high. Soon thereafter, various Indian telecom players started offering BB services (data & voice) at an affordable cost. Even then a user would have to purchase a BB device, the cost of which was quite steep for the common man.

Post 2008, a series of changes took place:

• Increase in the penetration of mobile technology— widespread usage across the country
• Advent of global players in the telecom sector
• Falling rentals for voice calls
• Introduction of 3G technology
• Easy access to Internet, through smart phones
• Introduction of better quality of smart phones
• Rise of the iPhone

While one could argue both ways on all of the above factors, it remains an accepted fact that easy access to the Internet and the availability of cheaper technology, i.e., both hardware as well as software, were the key factors in the upheaval that was to come. By 2008, Nokia had already started losing ground to BB devices; it was no longer a “status symbol”. At that time, users (and to a great extent Nokia too) started realising the perils of not keeping up with the changing times. Users realised that Nokia’s Symbian-based phones could not keep up with increasing end-user expectations, mainly related to emailing and access to the Internet. Also, BB was uniquely positioned because it offered a full QWERTY keyboard. While other device makers did try to play ‘catch-up’, they had already missed the boat.

BB’s troubles really began to surface after the introduction of the iPhone 4. It was slick, user-friendly and what many industry watchers would call ‘path-breaking’. There were several factors for and against the iPhone, some of which are:

• It was expensive but users felt it was worth it.

• While users were restricted to the iOS and iTunes environment, that environment itself provided so much that one did not feel the need to look beyond it. As a matter of fact, the feeling was that none of the other players provided so much.

• There was a paradigm shift in the user interface environment. While a standard number pad was considered a serious limitation, even the famed QWERTY keyboard and track ball/pad seemed laborious when compared to the iPhone’s touch-based interface.

The dynamics were completely stacked against the QWERTY keyboard when Apple introduced Siri, its much-touted voice-based interface.

• The ease of Internet access gave much needed succour to Internet-dependent apps like Google Talk and WhatsApp. (Here I would confess that I started using an iPhone just about then, around December 2009, and at the insistence of my mentor, installed WhatsApp. To be candid, I was more than happy to see my phone bill go down due to the lower number of SMSes.)

• The Apple App Store made sure that more and more (free as well as paid) apps kept cropping up. Users were spoilt for choice. It was only a matter of time before they realised the limitations of the BBM offering.

• The increasing popularity of the app marketplace and the introduction of iPhone clones were strong signs that the days of BB, and as a natural consequence BBM, were numbered.

• Many IT players saw the growing popularity of instant messaging apps and felt that there was a gap between what BBM had to offer and what consumers at large were expecting. Thus, WhatsApp, WeChat, etc., came into the market. Text-based messaging was destined to be a thing of the past. People were already expecting more. With WhatsApp and WeChat they got to send voice-based messages, videos, pictures, map locations, etc. It seemed that BBM was already gasping for breath at that time.

While there are several other factors one would want to consider, I think it would be sufficient to say that players like Nokia (which recently decided to sell out to Microsoft) and BB (taking losses, likely to cut approx. 5,000 jobs, considering a sellout, and recently announcing that it would offer BBM on Android phones) are feeling the heat or, as one can say, are dropping out of the race.

The next write-up will be about the popularity of apps like Skype, WhatsApp, WeChat, etc., and how these apps (like their predecessors) are likely to face competition from Google Hangout—the new kid on the block.

I wish all the readers the best of luck with the tax audit season.

Disclaimer: The purpose of this article is not to promote any particular site or person or software. Further comments about various products and services are based on the user experience related information available in the public domain. There is no intention to malign any product or service in any manner whatsoever. The sole intention is to create awareness and to bring into the limelight some thought-provoking content.

Google Hangout – I

About this write-up:
Mobile phones have pervaded almost every aspect of our life, be it in the personal space or in the work environment. This is true in so many ways. For instance, most people shudder at the very thought of what would happen if their mobile phone stopped working or was not with them, even for a single hour or a day. There are several reasons for this, and mobile apps have made a sizeable contribution in this regard.

While there are several apps capable of a variety of functions such as downloading, storing and sharing information, music and video, one of the most notable categories, which has really improved the user experience, is instant messaging. These apps have changed the landscape of mobile telephony and messaging. Google Hangout is the latest entrant in this arena.

This write-up briefly describes some of the features/capabilities of this app and how it would be useful to the readers of this magazine.

Introduction:
Mobile phones have pervaded almost every aspect of our life, be it in the personal space or in the work environment. So much so that most people find it difficult to imagine what would happen if their mobile phone stopped working or was not with them, even for a single day. There are several reasons for this, and mobile apps have made a sizeable contribution in this regard. There are several apps capable of a variety of functions such as downloading information, music and video, and storing and sharing all these. However, one of the most notable categories among these apps, one which has really improved the user experience, is instant messaging. These apps have changed the landscape of mobile telephony and messaging.

Instant messaging apps started off with a basic text option, gradually moved on to audio and now, finally, have started offering video options as well. This write-up briefly describes some of these apps and highlights the features of the latest entrant on the scene, i.e., Google Hangout.

Background:
Some of you may recall that just about a decade ago (2000 – 2003 types), the closest thing we had to instant messaging was ICQ chat, Yahoo Messenger or AOL Messenger. These were quite popular and hip. But when you think about it in hindsight… there was a catch… all of these applications were built for desktops/laptops. Ergo, these apps were instant only when you were in front of a PC. But that’s how technology was back then and most people found it useful. As a matter of fact, there are still remnants of those days, i.e., Google Chat and Yahoo Messenger are still in use (I am not saying popular). In most cases, they have been merged with the email account.

At that time, mobile apps were non-existent. This was partly due to the fact that owning a mobile phone was a luxury for many Indians. Mobile technology was in its nascent stages and quite expensive. The closest thing available to instant messaging back then was the Short Messaging Service or, as it was popularly called, SMS. But those days were different. Back then, SMSes were either free or cost a pittance (at least as compared to the cost of a voice call). But like all good things, like the telegram service and before that the pager service, SMSes too are fast becoming a redundant mode of communication. While this may seem abrupt to many, it isn’t so. Read on to know why.

The beginning of the end of text messaging:

One of the first nails in the coffin was put in by the BlackBerry Messenger service (“BBM”). Back in 2006, BlackBerry devices (“BB”) were a rage. Then, in 2007-08 (approx.), the BBM service was launched. The instant messaging landscape changed completely soon thereafter. By 2010, the popularity of BB and BBM had scaled new heights. And rightly so. After all, it was easy to use, instant and, most importantly, free of cost (i.e., not counting the cost of the BB and the data plan).

At that time, BBM had no competitors. There was a huge void between the BB and all other devices (mainly Nokia, HTC, Sony, Motorola). BB was riding high. However, there was one downside (at least for the users): you needed to own a BlackBerry device. That was no small catch, given that each BB device cost near about Rs. 18,000 plus.

Near about that time, Google Talk made its advent. While there were early adopters, reports in the public domain suggest that Google Talk didn’t really dent BBM’s hold on the market. There were several reasons for this, some of which could be listed as under:

• Available smart phones (not very smart, really speaking)

• Supporting operating system

• (most importantly) Availability of bandwidth (i.e. ability to access internet through the phone).

I know there was Wi-Fi, but come on… really… users would be able to access Wi-Fi only at limited places… is that really mobile?

Near about that time, a series of product/service launches were announced. Some of the notable ones were:

• Launch of the iPhone 3, 4 and 4S

• Use of 3G & 4G technology

• iTunes and the app market created around the iPhone ecosystem

• The QWERTY keyboard lost its de facto status as the standard interface to the touch-based interface (no pencil required, as in the case of the Palm and i-mate JAMin)

• Apple announced Siri, the new revolutionary voice-based interface.

While these changes happened over a period of 3-4 years, in this time period BB slowly and steadily started losing its grip on the smart phone market. With it, BBM started losing its relevance as an instant messaging app.

iOS and Android ecosystem:
With the launch of the iPhone (iOS) and the Samsung S series (Android OS), there were two basic expectations from the customer, i.e., easy Internet connectivity and newer offerings in the form of apps and utilities. BB and Nokia had taken their position for granted and failed to innovate. What they missed was capitalised upon by Apple and then by Samsung. Their phones and operating systems started behaving like hosts capable of doing a lot more than a simple phone, camera, music player, email and games offering. The phones offered a lot more interactivity and options to share.

Instant messaging:
Instant messaging has been a part of the mobile telephone ecosystem since the early 2000s. It was a hit back then, mainly on account of the pricing differential and the convenience it offered. But as they say, time and tide wait for no one and the only thing permanent is change. With newer technology such as 3G, 4G, WiMax, LTE, etc., users had the chance to use media with richer features/content like images, short audio files and video, the type of files which in the past were not used because of the time taken to upload and download them. The need of the hour was the development of apps that would piggyback on the cheaper Internet technology (whilst avoiding the more expensive telephony option) and give users a similar (in many cases better) experience. In the initial phases, developers focussed on apps which would allow users to send SMS via the Internet. While these did catch on, they didn’t really become mass products or a rage, as there were several limitations. Users were already habituated to software like Skype and Google Talk for online chats (audio as well as video); with developments like the iOS and Android ecosystems, stripped-down versions of these instant messaging software packages started entering the market.

Even these did not really achieve the lofty position of becoming the de facto standard (Skype did have a hold but…). Part of the reason was that these software packages (not apps) were resource hungry and demanding. Add to this, the need for heavy bandwidth.

I did try using Skype on my i-mate JAMin (2006-09) but was terribly disappointed. I was forced to uninstall Skype after two attempts to use it (and several attempts to stop my phone OS from hanging).

What this meant for an ordinary user was that you needed not only a very high-end phone but also a robust operating system and a broadband network for effective usage (similar to a desktop environment). That’s when apps like WhatsApp, Viber, etc., entered the market. These apps were game changers.

My next write-up will carry more information on why these apps became game changers and the reasons for the same.

Until then…. cheers

Disclaimer: The purpose of this article is not to promote any particular site or person or software. Further comments about various products and services are based on the user experience related information available in the public domain. There is no intention to malign any product or service in any manner whatsoever. The sole intention is to create awareness and to bring into the limelight some thought-provoking content.

Enough !

Computer Interface

Information is rushing at you from all directions — letters, newspapers, magazines, phone calls, voice mails, SMS, Twitter tweets, Facebook alerts and, of course, emails on Blackberry and your PC.

There is an enormous amount of content flowing in from all directions. Just look at the developments at Twitter and other social media.

It was named Twitter by its founder, Jack Dorsey, because the
company wanted to capture the feeling of buzzing the world. With a limit of 140
characters, no one could have predicted the success of a communicator of
‘inconsequential information’.

It is estimated that 7.8% of Twitter users come from India,
making India the number 3 user of Twitter after the US and Germany.

Many famous personalities have a large number of followers on
Twitter. Shashi Tharoor, Minister of State for External Affairs has around
3,00,000 followers !

YouTube claims to have more than 350 million monthly visitors and more than 3 billion photos and videos have been tagged on Flickr.

And what about emails ? For any professional around the
world, it is THE most critical form of communication. If your email server is
down, you might as well take a coffee break !

But, how do you manage the large traffic of emails which
inundates your inbox ? Do you feel overwhelmed ?

By pushing emails into handheld devices like Blackberry, the
ease of access has increased. However, it has also created some societal
problems. Children who are desperate to regain their parents’ attention from
Blackberry are often called ‘Blackberry orphans’. In some cities in Canada,
citizens have been requested to voluntarily turn off their Blackberry after 6.00 p.m.

There is a way to manage this information overload.

Technology can help. There are software tools to help you manage your inbox. They can prioritise your emails by importance, sort them by sender and filter them. These tools can also help you personalise your email settings.
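The kind of rule-based triage such tools perform can be sketched in a few lines. Everything below is illustrative; the sender addresses, keywords and rules are invented assumptions, not the logic of any particular product:

```python
# A minimal sketch of rule-based inbox triage (hypothetical rules/addresses).
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str

# Assumed illustrative rules: known important senders first, then urgent keywords.
VIP_SENDERS = {"partner@firm.example", "client@firm.example"}
URGENT_WORDS = ("urgent", "asap", "deadline")

def priority(mail: Email) -> int:
    """Return a priority band: lower number means higher priority."""
    if mail.sender in VIP_SENDERS:
        return 0
    if any(word in mail.subject.lower() for word in URGENT_WORDS):
        return 1
    return 2

inbox = [
    Email("newsletter@shop.example", "Weekly deals"),
    Email("client@firm.example", "Tax audit query"),
    Email("colleague@firm.example", "URGENT: filing deadline"),
]

# Sort by priority band, then by sender within a band.
for mail in sorted(inbox, key=lambda m: (priority(m), m.sender)):
    print(priority(mail), mail.sender, "-", mail.subject)
```

Real products layer machine learning and user feedback on top, but the principle is the same: score each message, then sort and filter on the score.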

Microsoft is now working on a futuristic application
(‘Priorities’) to sense work patterns and modulate the flow of emails — e.g.,
it will force a time lag in delivery of email depending on the urgency of the
email and the time of the day based on work pattern of the recipient.

Help is also available from personal productivity tools. You
can effect a change in your working habits. Some examples :



  •  Don’t start the day with powering on your PC and checking your emails. First, work out your plan for the day.

  •  Remove the mail alert. You will avoid jumping on to the Inbox as soon as you hear the alert, irrespective of the urgency of your other work.

  •  Make a distinction between mails which are urgent, important and a combination of both.

  •  Have a few hours of ‘email downtime’ in a day when you engross yourself fully in your other work.


We live in a knowledge economy and information is the most
valuable commodity. Rather than be overwhelmed with it, you need to figure out
how to ‘tame the beast’. Enough !


Browsers — Part I

PowerPoint presentations

Computer Interface

There are many ways that PowerPoint can be used. Some are
common, some less so. In this write-up we will try to deal with some of them
with an eye on how they can help users. But as always, there may be more than
just what this list mentions, so don’t limit yourself to the standard uses
listed below. The more common uses of PowerPoint are :




  • Presenter-based slide show
  • Independent slide show loops
  • Informational kiosks
  • Interactive training/testing software
  • Web design
  • Combinations



Presenter-based slide show :

Most of the time, presentations are designed to supplement a
meeting. The meeting may be just a few people, or thousands. In this type of
show you have a person or people giving a talk to a group. Sometimes the
presenter will run the PowerPoint via a podium PC or a remote control, while at
other times a person will be dedicated to just running the PowerPoint, but in
each case the primary focus of the meeting is the presenter and the information,
not PowerPoint.

Independent slide show loops :

Sometimes an independent slideshow is used. This is most
common at mega events, wedding receptions, anniversaries and reunions. This
style of PowerPoint presentation can also be used for company introductions,
product information, etc. Here the slide show is the sole focus and the
informational content will tell the whole story. Because there is no live focus,
the PowerPoint presentation will have to keep the viewers’ attention through the
use of graphics, sounds, animations and content, for instance, the electronic
scoreboard in a cricket stadium churning out animations at the fall of a wicket
or when Dhoni hits a six.

Informational Kiosk :

PowerPoint can also be used to run billboards, checkout line
advertising, information centre displays, and even trade show info booths. In
some cases there will need to be information collected from the viewer (for
post-meeting follow-up) and in others, self-updating information (weather, stock
reports, event scheduling). Drill-down information may be available by having
the viewer touch a button on the screen or click on a button. This allows a
viewer to select what information they are interested in.

Interactive testing/training :

PowerPoint is a great testing program and can be either
web-based or machine-based. A single user or group is shown a question and must
respond to advance the presentation. The presentation may branch to different
learning paths depending on the users’ choices, giving additional information
for areas where the users do not answer correctly. Often the scores are recorded
for later evaluation.

Web design :

PowerPoint can be used to design web-based presentations.
These can be exported to a format that is more web-friendly (HTML), but are limited
to the abilities of the users’ browsers. It can also be used to supplement a
web-based meeting, similar to a presenter-based slide show. While PowerPoint can
be used to design a website from scratch, it is not the best tool for this job.

Combinations :

Most of these groups are not exclusive, meaning that you may
combine aspects of one with aspects of another. In this way, PowerPoint may
become what you need it to be.

Planning a PowerPoint presentation :

The first step always, always, always, in planning a
PowerPoint presentation should be to turn off the computer. OK that was meant to
be a joke. Let’s take a step back and collect some of what you know by answering
a few questions :

  • Who is this presentation for ?
  • Who is your intended audience ?
  • What type of presentation method is best suited for this type of audience ?
  • What should have the audience’s attention ?
  • When is it needed by ?
  • Will this be a one-time presentation or one that will need updating regularly ?
  • Who or what am I dependent on to complete this on time ?
  • Who is responsible for the presentation content/script/storyboard ?
  • Will it need to run on all computers, a specific computer or my computer ?
  • What version of PowerPoint do I have (or will the other computers have) ?
  • What basic steps can I break up the project into?

The first question leads to the second, which should answer the third. This is the most critical part of the show-building process. Write it down if you have to and tape it to the monitor, but knowing your audience will help everything else fall into place.

It’s not that presentations are used only in business scenarios. There are non-business uses too, for example:

You can do a PowerPoint photo show for a birthday or an anniversary, wherein a photo-album-type loop will run during the whole party. So, you know that your audience is family members and friends, and that it should run as an unassisted kiosk loop that will be one of several focuses of the party as people drift over to watch it for a bit. You also know that the anniversary party is in 5 weeks, and that this will be a one-time show. You will need to get pictures from dozens of relatives, and will need to decide yourself which ones get included and what music to set it to, but the host wants to see it before the party. It will need to run on the host’s computer, which has PowerPoint 2003, but will also be distributed to anyone that wants a copy. You have permission to ask your cousins for help with the following steps: collecting pictures, sorting pictures, scanning pictures, inserting pictures into slides, rearranging slides, finishing the presentation, copying to CDs, labelling CDs. Wow, this is a lot of information, but it defines what you will need to do.

In the next write-up we will cover how to power your presentations using animations.

You can post your comments to me on sam.client@gmail.com


Kal, Aaj aur Kal

Computer Interface

In the previous article I wrote about technological innovations which shaped our present. Continuing from where we stopped, we know what the past is…. what everyone wants (usually) to know is what’s the future gonna be like ?…. well, I can’t tell you that, but what I can talk about is what to look out for.

Some of the trends (according to me) to watch out for are :

  • Open document formats

  • XBRL

  • Mashups

  • Virtualisation

  • Convergence

Open Document Formats (‘ODF’) :

They say change is inevitable, and with newer versions of software released in the market, we migrate to them like fish to water. But the trouble with change is that not everyone can change at the same pace, and when that happens you are faced with the question posed in the movie — what’s wrong with the old format, why do we have to change — and so on and so forth….

Using open standards like ODF ensures that the users’ information is accessible across platforms and applications, even as technologies change. Organisations and individuals that store their data in an open format avoid being locked in to a single software vendor, leaving them free to switch software if their current vendor goes out of business, raises its prices, changes its software, or changes its licensing terms to something less favourable for the user. Adoption of open standards is particularly important for governmental applications because it can effectively ensure that a government document saved today will not be technologically locked tomorrow.

ODF is likely to become a whole lot bigger in future (to learn more about open document formats read my write-ups in the Jan-Mar 2006 and May 2008 issues of the BCAJ).

eXtensible Business Reporting Language (‘XBRL’) :

As Chartered Accountants one thing we know better than most people is that — compliance is a huge part of our practice. We are always faced with the issue of shrinking deadlines and ever-increasing requirement to report information. Such reporting may not be limited to tax filings — it would also extend to financial, legal, statutory reporting, etc.

Part II

But reporting is one thing; analysing and interpreting, i.e., using the reported information, is another. Interpretation and analysis is the real deal and, as we all know, unless everyone follows the same set of rules, interpretation and analysis could lead to different results. In general, we usually report information in a generalised/result-oriented summary. These summaries are usually accompanied by a whole lot of notes (to help the user understand the summary). Nonetheless, users still spend humongous amounts of time trying to normalise data.

XBRL stands for eXtensible Business Reporting Language, an XML-based technology standard. It is a language for capturing financial information throughout a business information process that will eventually be reported to shareholders, banks, regulators and other parties. The goal of XBRL is to make the analysis and exchange of corporate information more reliable and easier, helping businesses increase business value and provide reliable, transparent financial data. The adoption of XBRL may permit stakeholders to access, compare and analyse data in ways that are at this time impractical. The reason is that the language is robust enough to boast of capabilities like:

  • Drill-down facility for abridged data
  • Reduced preparation time, effort and cost
  • Enhanced analytical capability
  • Standardised and simplified international access and acceptability
  • Platform neutrality ensures wider acceptability
  • Leverages the efficiencies of the Internet.

I am looking forward to the day when we will be able to file (upload) tons of information at a single click — hopefully that experience will be less painful than what we face today. (To learn more, read my write-ups in the Jan-Mar 2006 and May 2008 issues of the BCAJ.)
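To make the idea of “tagged” data concrete, here is a minimal sketch. The namespace, element names and figures below are invented for illustration; real XBRL instances follow published taxonomies and carry much richer context:

```python
# Parse a simplified, hypothetical XBRL-style instance document.
import xml.etree.ElementTree as ET

instance = """
<xbrl xmlns:in="http://example.org/in-gaap">
  <in:Revenue contextRef="FY2013" unitRef="INR" decimals="0">5000000</in:Revenue>
  <in:NetProfit contextRef="FY2013" unitRef="INR" decimals="0">750000</in:NetProfit>
</xbrl>
"""

root = ET.fromstring(instance)
ns = {"in": "http://example.org/in-gaap"}

# Because each number carries its concept, period and unit as attributes,
# software can extract and compare facts without re-keying a printed statement.
facts = {}
for concept in ("Revenue", "NetProfit"):
    el = root.find(f"in:{concept}", ns)
    facts[concept] = (int(el.text), el.get("contextRef"), el.get("unitRef"))

print(facts["Revenue"])  # → (5000000, 'FY2013', 'INR')
```

This machine-readability is what enables the drill-down, comparison and analysis capabilities listed above: the consumer of the filing reads facts, not formatting.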

Mashup tools :

In web development, a mashup is a web application that combines data from one or more sources into a single integrated tool. An example of a mashup is the use of cartographic data from Google Maps to add location information to real estate data, thereby creating a new and distinct web service that was not originally provided by either source. Mashups and meshups are different from simple embedding of data from another site to form a compound page. A mashup or meshup site must access third-party data and process that data to add value for the site’s users. Mashups typically ‘screen-scrape’ or use other brute-force methods to access the untyped linked data; meshups typically use APIs to access typed linked data. A mashup or meshup web application has two parts:

  • A new service delivered through a web page, using its own data and data from other sources.

  • The blended data, made available across the web through an API or other protocols such as HTTP, RSS, REST, etc.

Our methods of collecting and sharing information have evolved over a period of time. Mashup tools hold the promise of an intelligent information collection as well as a collaboration tool. Watch out for more on this tool.
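The core of a mashup, joining records from one source with data from another, can be sketched very simply. The listing and geocode data below are hypothetical stand-ins for what two real APIs (say, a property-listings service and a mapping service) would return:

```python
# A toy mashup: blend listings from one source with coordinates from another,
# the way real-estate sites layer property data onto a map service.
listings = [  # imagine this came from a property-listings API
    {"id": 1, "address": "12 Marine Drive", "price": 25000000},
    {"id": 2, "address": "7 Linking Road", "price": 18000000},
]

geocodes = {  # imagine this came from a separate mapping API
    "12 Marine Drive": (18.944, 72.823),
    "7 Linking Road": (19.055, 72.830),
}

def mashup(listings, geocodes):
    """Join the two sources into one enriched record per listing."""
    merged = []
    for item in listings:
        lat, lon = geocodes.get(item["address"], (None, None))
        merged.append({**item, "lat": lat, "lon": lon})
    return merged

for row in mashup(listings, geocodes):
    print(row["address"], row["lat"], row["lon"])
```

The value added is exactly this join: neither source alone offers priced listings on a map, but the blended result does.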

Desktop virtualisation :

Desktop virtualisation is the decoupling of a user’s physical machine from the desktop and software he or she uses to work. Most desktop virtualisation products emulate the PC hardware environment of the client and run a virtual machine alongside the existing operating system located on the local machine or delivered to a thin client from a data center server. Virtual desktop infrastructure (VDI) is a server-centric computing model that borrows from the traditional thin-client model, but is designed to give system administrators and end users the best of both worlds: the ability to host and centrally manage desktop virtual machines in the data center while giving end users a full PC desktop experience. The user experience is intended to be identical to that of a standard PC, but from a thin-client device or similar, from the same office or remotely. Installing and maintaining separate PC workstations is complex, and traditionally users have almost unlimited ability to instal or remove software.

Desktop virtualisation provides many of the advantages of a terminal server, but (if so desired and configured by system administrators) can provide users much more flexibility. Each user, for instance, might be allowed to instal and configure his own applications. Users also gain the ability to access their server-based virtual desktop from other locations.

Advantages:

  • Instant provisioning of new desktops

  • Near-zero downtime in the event of hardware failures

  • Significant reduction in the cost of new application deployment

  • Robust desktop image management capabilities

  • Normal 2-3 year PC refresh cycle extended to 5-6 years or more

  • Existing desktop-like performance including multiple monitors, bi-directional audio/video, streaming video, USB support, etc.

  • Ability to access the users’ enterprise desktop environment from any PC (including the employee’s home PC)

Convergence:

This is one development that has been talked about for as long as I can remember. Telecom-media convergence is about crossing multiple industries. Fixed, mobile and IP service providers can offer content and media services, and equipment providers can offer services directly to the end user. Content providers are constantly looking for new distribution channels. Convergence is the combination of all these different media into one operating platform. It is the merger of telecom, data processing and imaging technologies. This convergence is ushering in a new epoch of multimedia, in which voice, data and images are combined to render services to the user. I am waiting for the day when 3G/WiMAX will be a common thing and your mobile phone will be more than just a communication device: it will be your TV, your travel guide, your office away from the office (yikes — strike that out), your wallet.

Well, that’s all for now. Just to share a small secret: penning this write-up gave me a chance to go through my earlier articles, and it made me realise that what seemed outrageous then seems like a no-brainer today. Innovation is so much a part of our lives that even the changes of the last few years seem significant.

I will probably revisit this write-up next March to see where we stand…

Virtual Data Rooms — Part 2

Computer Interface

The previous write-up was about the importance of information
for decision-making, specifically in mergers and acquisitions. Sensitive
information can make or break the deal or tilt the scales either way. As
mentioned, confidentiality is of prime importance, given the fact that the
target is laying bare its soul (literally). The dilemma is how to make
information available, simultaneously, to a select but large group of
individuals/experts, within limited time and at limited cost, while maintaining
control over the flow and use of the information provided. See picture 1.

Virtual Data Rooms (VDRs) are online repositories, providing
an infrastructure for uploading and sharing digitised data. These data rooms can
contain documents — files, letters, records and transcripts —but may also
include other relevant information in any form, from audiotapes to soil samples.
The data in the data room are resources that represent legal proof of the target
company’s asset value and reveal its earning potential and ultimately its value.

Before entering a data room, potential buyers typically have
a good understanding of the target and its business, and have a preliminary
opinion on the consideration they are willing to pay for the target. In these
cases, potential buyers inspect documents to discover hidden earning potential
that may be capitalised upon or to uncover hidden risks that are not publicly
known. The potential buyer sends its team of experts to verify their known
information about the target with the contents of the data room and to gather
new information.

The prime objective of the review is to act diligently and verify
in detail the information presented by the target. In a well-executed due
diligence process, an expert in the field would inspect each document in the
data room, regardless of whether the information is obvious.

A Virtual Data Room has several advantages over a Physical
Data Room such as :

Text recognition :

Offered by some VDR providers; allows text in scanned
documents to be recognised by a computer program, making it available for
searching and spell-checking.

Search function :

A key feature of a VDR; enables users to search documents for
specific words and phrases, much like an Internet search engine. This is a
significant improvement over PDRs, where searches rely on the document index
and operate only at the document level, with no way to search for specific
words and phrases.
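To see why word-level search beats a document-level index, here is a toy inverted index in Python. The file names and text are invented for illustration; real VDRs use far more sophisticated search engines.

```python
from collections import defaultdict

# Toy inverted index: maps each word to the set of documents containing
# it. This is roughly what lets a VDR answer word/phrase queries that a
# paper index (document titles only) cannot.

docs = {
    "lease.txt": "the premises are leased until 2015",
    "loan.txt": "the loan is secured against the premises",
}

index = defaultdict(set)
for name, text in docs.items():
    for word in text.split():
        index[word].add(name)

# A title-only search for "premises" would find nothing;
# the inverted index finds both documents instantly.
print(sorted(index["premises"]))   # ['lease.txt', 'loan.txt']
```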

Q&A function :

Buyers are permitted to ask sellers questions related to the
data room and its contents, securely and efficiently. VDR users may ask
questions through the VDR screen interface by clicking on a ‘Q&A’ icon; some
VDRs may allow questions to be routed directly to the appropriate operations
team member. While asking and replying to a question, both buyer and seller
representatives may easily refer to the document in question by simply clicking
on its icon.

Audit trail function :

The target can track documents in real time, including
viewing access by frequency, date and user, which enhances the transparency of
the data room process. This gives the target the ability to profile and rank
potential buyers based on their level of interest, and indicates the most
frequently accessed documents; this is important in ascertaining which buyers
should proceed to a second round of due diligence, which usually involves
disclosing sensitive company documents. In the event of legal proceedings or
misuse of confidential documents in the VDR, the audit trail provides proof
that a certain user accessed specific documents. Conversely, the buyer may use
the same tool against the target if documents are not made available to the
buyer.
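The profiling described above boils down to counting access events. A minimal sketch, with invented buyer and file names, might log each view as a (user, document) pair and then rank users and documents by frequency:

```python
from collections import Counter

# Toy audit trail: each view is logged as (user, document).
# Ranking users by view count mirrors how a target might gauge
# buyer interest; ranking documents shows what draws attention.

log = [
    ("buyerA", "accounts.pdf"),
    ("buyerA", "contracts.pdf"),
    ("buyerB", "accounts.pdf"),
    ("buyerA", "accounts.pdf"),
]

views_per_user = Counter(user for user, _ in log)
hot_documents = Counter(doc for _, doc in log)

print(views_per_user.most_common())  # buyerA is the most active
print(hot_documents.most_common(1))  # accounts.pdf is the most accessed
```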

Dynamic indexing :

Allows sellers to upload ‘late’ documents to the VDR,
efficiently placing them in their appropriate position in the VDR index; also
allows the seller to quickly reorder documents in the index and to inform
potential buyers through e-mail or SMS of changes to the index and data room
contents. A complete restructuring of the index, however, may not be possible.
This is a significant improvement over the paper-based, manual indexing and
filing of PDRs, which was prone to errors and sometimes resulted in buyers not
being informed of updates to data room contents.
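Slotting a late document into its correct place, rather than renumbering a paper index by hand, is essentially a sorted insert. A sketch with made-up index entries:

```python
import bisect

# Toy dynamic index: a 'late' document is slotted into its correct
# position automatically, with no manual renumbering of the index.

index = ["01-accounts", "02-contracts", "04-litigation"]

late_doc = "03-employees"
bisect.insort(index, late_doc)   # inserted in sorted order

print(index)
# ['01-accounts', '02-contracts', '03-employees', '04-litigation']
```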

Restricted use :

In a PDR, the data room supervisor has to physically manage
documents that may or may not be permitted to be copied; in a VDR, by contrast,
digital documents are flagged as restricted with respect to copying, printing,
downloading or viewing. Further, restrictions may be placed on certain portions
of documents, and may even be contingent, such as allowing a legal expert to
download only legal documents but not financial documents. Viewing restrictions
may be placed on sensitive documents made available only during a second round
of due diligence.
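Such per-document, per-action flags can be pictured as a small permission table. The roles, file names and flags below are invented for illustration:

```python
# Toy per-document permission flags, as a VDR might enforce them.
# A role may be allowed to view a document but not download it.

permissions = {
    "financials.xls": {"view": {"finance", "legal"}, "download": {"finance"}},
    "contracts.pdf":  {"view": {"legal"},            "download": {"legal"}},
}

def allowed(role, doc, action):
    # Unknown documents or actions default to 'denied'.
    return role in permissions.get(doc, {}).get(action, set())

print(allowed("legal", "financials.xls", "view"))      # True
print(allowed("legal", "financials.xls", "download"))  # False
```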

Watermarking :

A security feature for digital documents in a VDR;
watermarking is the printing of certain words (such as the user’s name) across
the face of a document as identification and allows tracking of the document in
the event of illegal distribution.

Variety of file formats :

VDRs can usually store files of varying formats, including
PDF, Excel, PowerPoint, Word, GIF, MPEG, JPEG and TIFF, eliminating the need to
convert files to a specific file type; alternatively, the VDR system may
transform uploaded files into the specific format it requires.

A basic comparison of the strengths and weaknesses of the Physical and the
Virtual Data Room can be summarised as below :

Advantages of VDRs to buyer :

  • Cost savings
  • Time savings
  • Comfort
  • Transparency
  • Level playing field


Disadvantages to buyer:

  • Additional work
  • Competitive price
  • Reading documents online
  • System speed
  • Non-digital information


Advantages of VDRs to seller:

  • Simplicity
  • Ease of setup
  • Cost savings
  • Competitive price
  • Legal compliance
  • Time savings
  • Security


Disadvantages to seller:

  • Security


So the next time you are involved in a due diligence exercise, do make it a point to assess the positives and negatives highlighted in this write-up. You could perhaps add value by advising your client on how to manage risk in the process.

Windows 7

Computer Interface

This write-up is about the Windows 7 operating system, and the
objective is to highlight the pros and cons of the new operating system.
Generally, users in their zeal to keep up with the latest technological
developments rarely look before leaping towards the unknown, and thereafter
blame others for their folly. This article has been written keeping such an
average user in mind and, for the record, I consider myself an average
user too.

Once upon a time there was an operating system called Windows
95, which dominated most of the tech world/desktop PCs. Windows 95’s dominance
continued when its new avatar, Windows 98, took over. And then there
were others (Windows CE, Windows ME & Windows NT: CE-ME-NT ?), but none like
Windows XP. Windows XP was a wonderful desktop operating system and it ruled
over desktops for a very long time. Even today, after it has (officially)
passed on the reins to Vista, Windows XP continues to overshadow its
predecessors as well as its successor: many buyers still ask for XP instead
of Vista, and many have even sought to uninstall the preloaded version of
Vista in favour of XP. However, with newer applications and technology being
developed every day, one needs to move on. With XP more popular than Vista and
people preferring to downgrade from Vista to XP, what did Microsoft do ? Did it
infuse better code into Vista ? Was XP resurrected from the dead ? On the
contrary, it announced the launch of Windows 7.

Microsoft has announced that the launch will be in the last
week of October 2009. In fact, Microsoft has already begun dishing out trial
versions to users in order to get feedback and plug any bugs. The reactions of
users, however, have been guarded.

Other operating systems came and went, sometimes in the
blink of an eye, yet others like Windows XP stayed firm. But the question that
begs to be answered is: why ? An industry expert points out that what many
neglected to figure out was that Vista needed a different machine and hardware
to function at its optimum, not a machine designed for XP. It is like when Bill
Gates announced the launch of Windows 98: he said it was faster than Windows 95
and better to use. What he failed to mention was that it needed a different
machine to work on, and not the old machine itself. Oops, a minor detail !


Another fact about Windows Vista is that it gave
dedicated Windows users a tough time. For instance, Vista hogged gigabytes of
disk space, and users had to interact with the machine, saying ‘yes’ (as
someone put it) a million times before it would even contemplate copying a
small file from one place to another.

So what did Microsoft learn from all this (and after a couple
of million dollars down the drain) ? It looks like very little, because (once
again) they have forgotten to mention that e-mail, address book, calendar,
photo management, movie editing and instant messaging won’t be available with
Windows 7. These have to be downloaded from Microsoft’s website. What’s more,
in some cases, additional requirements apply. For instance, Windows XP Mode
requires an additional 1 GB of RAM, an additional 15 GB of available hard disk
space, and a processor capable of hardware virtualisation with Intel VT or
AMD-V enabled. Oops !

There are news reports stating that while designing Windows
7, users were asked, if it were up to them, how would they make XP better ? What
would they want from a new OS ? The feedback was that anything combining
simplicity, sleek design, ease of operation and interactivity sits pretty much
at the top of the ‘to own’ list. The result ? Windows 7. In fact, the marketing
hype from Microsoft says, “To create the next generation OS, which would make
everyday tasks faster and easier and make new things possible, Windows 7
simplifies things with a more streamlined design and one-click access to
applications and files. It has a faster boot-up and shut-down time and comes
bundled with improvements in terms of reliability, battery life and fewer
alerts.” What’s more, Windows 7 promises to be not only faster but also more
intuitive. Features like multi-touch, JumpLists and HomeGroup have been built in
to enable consumers to interact with their PCs in faster and more intuitive
ways. Enhancements to the Windows taskbar, JumpLists and search are designed to
make navigation much easier. Also, InPrivate browsing in IE 8 prevents browsing
history, temporary Internet files, form data, cookies and usernames and
passwords from being retained by the browser. Controlling the computer by
touching a touch-enabled screen or monitor is another core Windows 7 user
experience.

But the fact is that these features were always available in
Mac OS. For those who have used Apple’s systems, it is easy to see what has
been borrowed. The taskbar looks and works like Mac OS X’s Dock:
big square icons of your favourite programs. Other Apple borrowings include the
sticky notes program, and multi-touch gestures like rotating an image by
twisting your fingers and pinching to zoom. Aero Shake allows you to get all
but one window out of the way: grab the top of that window, shake it, and all
the other open windows minimise to the taskbar. Shake the window again, and
they all pop back on screen.

Didn’t we see something similar in Vista as well ? It felt
nice initially, but eventually I used to turn it off because the feature would
either slow down the PC or end up disrupting other programs. What’s the point
of an enhancement that cannot balance user experience against cost and
performance degradation ?

Another feature in Windows 7 is Snap. With Snap, one can
simply grab a window and pull it to either side edge of the screen to fill half
the screen. If one wants to quickly see gadgets or grab a file from the
desktop, all one needs to do is move the mouse to the lower right corner of the
desktop; Peek makes all the windows transparent so one can view the desktop.
Windows Flip is a feature similar to Mac OS X’s Exposé. At the rate at which
features are being borrowed, one may want to wait for Snow Leopard (the new OS
announced by Apple) before switching to Windows 7.


There is also the issue of cost of the software. The cost remains unclear, though initial reports had indicated that the estimated prices for the full Windows 7 package in the US for the premium, professional and ultimate versions would be $ 199.99, $ 299.99 and $ 399.99, respectively. The hype however says that “Firstly, the price for the retail versions of Windows 7 Home Premium and Windows 7 Professional will reduce in the range of 15-25%. Secondly, India pricing for these two versions is lower by 25-40% in comparison with developed markets like the US. Pricing for other retail versions of Windows 7 remains the same as Windows Vista.” Of course this does not include the additional requirements in terms of RAM/CPU, etc. It’s still a wait and watch for me.

After all the cribbing, readers may wonder whether it’s worth the upgrade … maybe … maybe not. Wait for the next write-up before you leap.

Cheers !

This article is merely an attempt to give the readers a bird’s-eye view of the reactions. This article is not intended to be either an endorsement or critique of any particular software or feature.


Windows Phone 7

Computer Interface

This month’s tech update is about the recently announced
Windows Phone 7. In December 2009 itself, Microsoft had announced its intention
to release this version (the latest offering from Microsoft’s stable) by
November 2010. Needless to say, this was one of the most eagerly awaited
announcements of the quarter. This write-up merely summarises some of the
stories on this new software.

Behind the scenes story :

For the uninitiated, Windows Phone 7 Series is Microsoft’s
reboot of its mobile platform, previously named Windows Mobile. Even though
Windows Mobile had the first-mover advantage as the smart phone operating
system of choice, the platform suffered significant losses in market share last
year. Apple’s iPhone and Google’s Android platform ate into Microsoft’s
territory by offering a better user experience, a more robust platform and
phone apps.

The development team at Microsoft was clear that they would
be rethinking a lot of things and that there would be a sea change in the
approach to the development process itself. They revamped just about every
aspect of building the phone software, ranging from how they perceived customers
to how they would go about engineering for the product. An obvious course,
considering that they were attempting to regain their mobile groove by offering
a brand new user interface integrating applications and multimedia into ‘Hubs’
(i.e., software experiences organised into main categories or menus) as well as
a tidier platform for third-party developers to create and serve apps.

For development, the plan was that Windows Phone 7 Series
would employ the Silverlight and XNA programming environments. Silverlight
would serve as the coding toolkit for ‘rich Internet applications’. (As
Microsoft’s alternative to Adobe Flash, this is not surprising, and potentially
gives Windows Phone 7 an edge over phones that don’t support Flash or
Silverlight, namely the iPhone.) XNA, on the other hand, refers to a set of
programming tools that makes it easier for game designers to develop games for
multiple Microsoft platforms, including Windows XP, Xbox 360, Windows Vista and
Windows 7.

Simply put, it meant that most mobile apps would be made with
Silverlight, while more graphics-intensive 3D games would most likely be
developed with XNA. The objective was for Microsoft to make the tools
friction-free and to enable developers to get in as easily as possible.

The user interface (UI), while similar to the iPhone’s in
some respects, was intended to be different from that of any other smart phone
in the market. The phones would support the same touch gestures seen on the
iPhone: pinch or double-tap to zoom, and swipe in a given direction to pan. For
hardware, each Windows Phone 7 Series phone would include seven standard
physical buttons, controlling power, volume, screen, camera, back, start and
search.

Comparison of some innovative and unique features :

With the lucrative mobile ecosystem getting crowded, the
existing players are battling hard to retain their market share (in some cases,
to remain relevant). Up to 2009, developments were not as fast-paced as they
are today. In fact, while Microsoft initially took the lead over the other
mobile/phone operating systems, the iPhone and Google Android devices spent a
few years refining their user interfaces and features, which gave them plenty
of time to get ahead of Microsoft’s ailing Windows Mobile OS.

Once they got into the fray, the only way out for Microsoft
was to come up with a totally new user interface, i.e., the Windows Phone 7 OS,
and that too without the luxury of time. Add to this that Microsoft had to
build the system from scratch and could ill afford significant delays in
release, and the likelihood that they would leave out several features we now
take for granted on our smart phones was pretty high. Nonetheless, Microsoft
brought a few interesting new elements to the table with Windows Phone 7,
elements that they thought would be preferred over the usability of an iPhone
or an Android phone.

All this has generated quite a bit of chatter on the pros and
cons of some of the features; the more popular topics of discussion are
summarised below :

  • Microsoft’s new mobile OS doesn’t have copy/paste capabilities :

    Some of you may recall that the first, the second, and even the third iPhone did not initially have copy/paste functionality either (copy/paste for the iPhone arrived later as a software update), but that was over a year ago. Interestingly, Android had this capability from day one. Besides this, both the iPhone and Android have office apps that work a lot better than Microsoft’s. On Windows Phone 7, office work is severely hampered by the lack of copy and paste.

    Microsoft reportedly revealed during a Q & A session at its MIX10 conference that it believes that people don’t need copy-and-paste on their phones. Instead, the new OS offers new functionality the company believes people actually want. For example, the new Microsoft handsets will identify addresses and phone numbers, and you will reportedly be able to send this information to different applications such as the phone or your contacts manager.

    Notwithstanding, the exclusion of copy/paste in Windows Phone 7 doesn’t earn the new OS any gold stars for functionality.

  • Second on the list of missing Windows Phone 7 features is true multitasking :

    Windows Phone 7 does not allow third-party apps to run in the background; it pauses them until you return to the app. Multitasking is something Android had from day one, and it was later introduced for the iPhone. This puts the OS in the same situation the iPhone was in over a year ago, when only Apple’s apps could run in the background.

    It is interesting to note that one of the most highly criticised points against the iPhone is its inability to multitask, which prevents you from using more than one third-party application at a time. You can’t, for example, use Blip.fm while reading something in your Kindle or New York Times app. Apple’s solution to this problem was to create a push notification system, where the content provider pushes information to your phone instead of an application on your phone calling the content provider to get it. One reason critics are able to live with Apple’s strategy is that the iPhone can switch between applications fairly quickly, and most developers make sure their iPhone apps can open up from where you left off. So the downtime between closing and opening different apps, and finding the content you need, isn’t that significant on the iPhone.

    If Windows Phone 7 apps aren’t as fast and smart as their iPhone counterparts, Microsoft could end up being heavily criticised for its no-multitasking, push-notification system.

  • The third debated feature oversight for Windows Phone 7 is the lack of Adobe Flash, Silverlight, or HTML5 support in the browser :

    Steve Jobs squashed any ideas of running Flash on an iPhone, so Android is the only one left in this round. It took Google and Adobe over a year to come up with Adobe Flash support for Android, but now the latest generation of Android phones has the feature. If Microsoft really wanted to have an edge over the iPhone and fight Android, it could have at least supported its own Flash-competing technology, Silverlight, on Windows Phone 7 devices. This is surprising, considering that Silverlight was supposedly part of the original plan.

  • Android and the iPhone have equivalents of hubs :

    Android has notifications; the iPhone has folders. Windows Phone 7’s hubs, meanwhile, are criticised as dysfunctional because they only notify you of Microsoft messages. In effect, there are no notifications for the third-party apps you use, because those third-party apps cannot multitask. The apps are frozen, or ‘tombstoned’, and can’t notify you of anything. The moot question is: what, then, is the point of the tiled hub interface if you can’t get notifications for the things you want (rather than what Microsoft wants) ?

    For the benefit of the readers, I have compiled a small list of what’s missing and what’s new.

    Features available in either the iPhone or an Android phone (but not available in Windows Phone 7) :

        • Copy and paste
        • Multitasking
        • Flash support
        • HTML 5 support
        • Unified inbox
        • Threaded e-mail
        • Visual voice mail
        • Video calling
        • Universal search
        • Removable storage
        • A large app catalogue (1,000-plus apps for Windows Phone 7, as against over 3 lakh for the iPhone and about a lakh for Android)

    Features available in Windows Phone 7 (but not available in either the iPhone or an Android phone) :

        • Limited removable storage
        • Facebook integration
        • Microsoft Office support
        • Widget tiles on the home screen
        • Xbox Live integration
        • Panorama view of hub content
        • Animated transitions
        • Unlimited music downloads from Zune, unlimited video downloads from U-verse
        • XNA game developer platform

    While it is too early to say whether Microsoft was right or wrong, Windows Phone 7 has (maybe rightly) received a lot of flak from reviewers for not having some features that many owners take for granted on their current smart phones. The next write-up will have more on the story as it unfolds (Windows Phone 7 is scheduled for launch on 7th November).

Technology news this week

In spite of the current downtrend, investment and development in newer technology continue unabated. This month, I have pieced together some of the latest developments in the Information Technology industry, to give readers some insight into where things are headed.

Pirate Bay verdict and file-sharing

    The internet has always been a symbol of knowledge sharing. Among other things, countless users have been sharing much more than just knowledge, i.e., personal information, music, movies, etc. When sharing of music and movies began hurting the entertainment industry at large, law enforcement agencies started cracking down on such sites. The Pirate Bay was one such file-sharing site and is the latest casualty of the cause. The verdict against the founders of The Pirate Bay is being hailed by many as a triumphant win against illegal file-sharing. The four men involved in the BitTorrent tracking site were found guilty of being accessories to violating copyright law. A Swedish Court sentenced each of them to a year in jail, with a collective fine of $3.6 million. In the long run, though, the verdict may not be as significant as some suggest when it comes to the battle against online file-sharing. Observers have opined this because, just like Napster, The Pirate Bay doesn’t actually host copyrighted files; it simply allows users to post links to material hosted on third-party servers. That’s why, incidentally, prosecutors ended up dropping the initial charge of ‘assisting copyright infringement’ and pursuing only an ‘assisting making available copyrighted material’ charge instead. The Court, in effect, said that even if your operation is distributed, you are nevertheless encouraging your users to violate copyright, and will be held accountable.

Stealthy Rootkit

    Countless websites have been rigged to deliver a powerful piece of malicious software that many security products may be unprepared to handle. The malicious software is a new variant of Mebroot, a program known as a ‘rootkit’ for the stealthy way it hides deep in the Windows operating system. An earlier version of Mebroot first appeared around December 2007 and used a well-known technique to stay hidden.

    Since Mebroot appeared, security vendors have refined their software to detect it. But the latest version uses much more sophisticated techniques to stay hidden. The new variant inserts program hooks into various functions of the kernel, the operating system’s core code, and once it has taken hold, makes it appear that the Master Boot Record (MBR) hasn’t been tampered with. The rootkit infects the computer’s MBR, which is the first code a computer runs when booting the operating system after the BIOS. If the MBR is under a hacker’s control, so is the entire computer and any data that’s on it or transmitted via the Internet. Each time the computer is booted, Mebroot injects itself into a Windows process in memory, such as svchost.exe. Since it operates in memory, nothing is written to the hard disk, another evasive technique. Mebroot can then steal any information it likes and send it to a remote server via HTTP.

    The infection mechanism is known as a drive-by download. It occurs when a person visits a legitimate website that’s been hacked. Once on the site, an invisible iframe is loaded with an exploit framework that begins testing to see if the browser has a vulnerability. If so, Mebroot is delivered, and a user notices nothing.

Nokia’s new E75 phone

    Nokia has unveiled a new addition to its E-series range: the Nokia E75, which comes with the Nokia Messaging push e-mail service pre-installed. According to a press release, the E75 is the first device from Nokia to offer complete integration of e-mail and messaging services; it provides an easy process for instant e-mail set-up and supports up to 16 e-mail accounts. Among other things, the Nokia E75 boasts full desktop e-mail functionality along with both a standard keypad and a QWERTY keypad. It is also touted to be capable of supporting all features of the Nokia Messaging Service and a number of third-party e-mail solutions, namely Gmail, Yahoo, Rediffmail, Sify, Indiatimes, Net4, Hotmail and In.com, amongst others.

    Of course, the Nokia E75 comes with the usual stuff, i.e., an intelligent input feature with auto-completion and correction, a 3.2 megapixel camera with autofocus and flash, an integrated music player, media player, FM and internet radio, integrated A-GPS with pre-loaded maps on the memory card and, last but not the least, a built-in mobile VPN for intranet access. In case you are wondering . . . . it also comes with a price tag of approx. 27K.

The most vulnerable browser

    Firefox fans take note : news reports circulating on the net suggest that Firefox is far more vulnerable than Opera, Safari and Internet Explorer, and by a wide margin. In 2008, it had nearly four times as many vulnerabilities as each of those browsers: Firefox reportedly had 115 vulnerabilities in 2008, compared to 30 for Opera, 31 for Internet Explorer and 32 for Safari. That doesn’t mean, though, that Internet Explorer is off the hook for security concerns. Far from it: ActiveX remains the browser plug-in or add-on with the most vulnerabilities.

New iPhone 3.0 Beta software

Apple has released a third beta build of the iPhone 3.0 software, taking developers one step closer to the final release in June. One of the most significant additions in the latest beta is the way individual apps will be able to notify users of updates or additional content. At the moment, apps can flag new events to users only through iTunes, but with the 3.0 build they will be able to do so right on the phone, via badge, text or sound notifications. Spotlight (phone-wide search) will now let users save the last search they made, and users can set restrictions for in-application purchases and location data. An interesting fact about the third beta is that the Skype app no longer works on 3G. With previous builds, Skype allowed 3.0 beta users to place calls via 3G, unlike the same app on the current 2.2 platform, which can make calls only over Wi-Fi. Apple seems to have fixed this ‘bug’, so no more wishful thinking about cheap VoIP in the 3.0 final release. This third beta indicates the imminent arrival of the final 3.0 software in June, just as Apple promised. However, the question remains whether we will get some new iPhone hardware as well, especially as rumours have intensified over the last few weeks, detailing hardware components and features.

That’s all for this month, I hope to have more interesting developments next month.

Open document format

Computer Interface

History :


Documentation has been a part of our culture ever since the
written word was invented. Documentation, as we all know, is the simplest
method of enabling understanding and referencing. The methods of documentation
have of course evolved over the years, along with the formats in which the data
was stored. Data formats, likewise, have been around for as long as
computing. They reflected the varying capabilities and functions of different
computing systems and have evolved as those systems evolved. In
the decades since, a wide range of formats (TXT, PDF, HTML and DOC, just to
name a few) became popular because they met specific user needs and tapped into
new computing capabilities as they emerged. Then came ever-increasing
expectations and demands, and technology met them by changing at a scorching
pace. Advances were being made in the field literally on a day-to-day basis, to
the extent that redundancy actually became an inbuilt attribute.

With such advances and the passage of time, the formats that
don’t keep pace fade away into the dark corners of technological redundancy.
Many of us have experienced the disappearance of older formats. For instance,
punch cards were once commonplace, but you wouldn’t think of using them today.
WordStar was once what everyone used as their word processor; now, even filters
to read the format are less and less common. (Closer to home: Tally 4.5 to
Tally 9, Windows 3.11 to Vista, and so on and so forth.) Luckily, the WordStar
format is similar to ASCII and is thus mostly recoverable. But there are times
when I can’t read some important PowerPoint 4 files in today’s PowerPoint, only
7 years later. It has come to the point where a file created in a piece of
software less than half a decade ago is no longer usable, because the
software/application no longer accepts (supports) it.

Today :




  •  When you buy a music CD, you know it will fit in your CD player.

  •  When you buy canned food, you know it will work with your can opener.

  •  When you buy a toaster, you know it will work with the power plugs in your house.

  •  When you visit a website, do you need to know what software was used to create the web page?

  •  When you send an email, do you need to know what email client your friend uses?



Then why should it be different for your documents? You
should be able to send your documents to your customers without knowing what
office software they run, and be confident that they will work. Have you ever had
trouble opening a document that someone sent you? Have you ever bought a copy
of application software that you didn’t want, because you had to read
documents that only work with that version? Have you
ever wondered why there is so little choice in office software?


  •  What if you could send a file to anyone and know that they can read it ?



  •  What if you could buy any product you want and know that you can still
    communicate with your customers ?



This is where the OpenDocument Format (ODF), an open,
XML-based file format for office documents, comes into the picture.
OpenDocuments include text documents, spreadsheets, drawings, presentations and
more. The OpenDocument format is freely available for any software maker to use and
implement, and does not favour any vendor over the others. The creation of
XML-based document formats continues this evolution, and even within this
category a number of formats are being developed, including ODF, Open XML and
UOF. We should expect the creation of new formats in the future as the
technology evolves, and, as has always been the case, users should be able to
choose the formats that work best for them.
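To see what ‘open and XML-based’ means in practice, here is a minimal sketch in Python, using only the standard library, that writes and then reads back a bare-bones OpenDocument text file. It is deliberately stripped down for illustration — a fully spec-conformant .odt carries additional entries such as a manifest and styles — but the essential shape (a ZIP package whose body lives in a documented content.xml) is real.

```python
import zipfile
import xml.etree.ElementTree as ET

# Two of the XML namespaces defined by the ODF specification.
OFFICE = "urn:oasis:names:tc:opendocument:xmlns:office:1.0"
TEXT = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

# A stripped-down content.xml: one paragraph of body text.
CONTENT = f"""<?xml version="1.0" encoding="UTF-8"?>
<office:document-content xmlns:office="{OFFICE}" xmlns:text="{TEXT}"
                         office:version="1.2">
  <office:body>
    <office:text>
      <text:p>Hello from an open format.</text:p>
    </office:text>
  </office:body>
</office:document-content>"""

def write_minimal_odt(path):
    """Write a bare-bones .odt: a ZIP whose first entry is the
    mimetype (stored uncompressed, as the spec requires),
    plus content.xml with the document body."""
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("mimetype",
                   "application/vnd.oasis.opendocument.text",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("content.xml", CONTENT)

def read_paragraphs(path):
    """Pull the paragraph text back out with nothing but the
    standard library -- the point being that no particular
    vendor's software is needed to get at the data."""
    with zipfile.ZipFile(path) as z:
        root = ET.fromstring(z.read("content.xml"))
    return [p.text for p in root.iter(f"{{{TEXT}}}p")]

write_minimal_odt("hello.odt")
print(read_paragraphs("hello.odt"))  # ['Hello from an open format.']
```

Because the package layout and the XML vocabulary are published, any program — a word processor, a script, an archiving tool decades from now — can do what this sketch does.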

Recent developments :

One objective of open formats like OpenDocument is to
guarantee long-term access to data without legal or technical barriers, and some
governments have come to view open formats as a public policy issue.
OpenDocument is intended to be an alternative to proprietary formats,
including the commonly used DOC, XLS, and PPT formats used by
Microsoft Office
and other applications. Up until Feb. 15th 2008, these
latter formats did not have documentation available for download, and were only
obtainable by writing directly to Microsoft Corporation and signing a
restrictive non-disclosure agreement. As of Feb. 15th 2008, Microsoft
offers documents for download claiming to accurately specify the aforementioned
document formats (although this claim hasn’t been independently verified yet).
Microsoft is supporting the creation of a plug-in for Office to allow it to use
OpenDocument. The OpenDocument Foundation, Inc. has created a similar
plug-in that will allow continued use of Microsoft Office.

The OpenDocument format (ODF, ISO/IEC 26300; full name:
OASIS Open Document Format for Office Applications) is a free and open file
format for electronic office documents, such as spreadsheets, charts,
presentations and word-processing documents. While the specifications were
originally developed by Sun, the standard was developed by the Open Office XML
technical committee of the Organization for the Advancement of Structured
Information Standards (OASIS) consortium, and is based on the XML format originally
created and implemented by the OpenOffice.org office suite (see OpenOffice.org
XML).

Case for the Governments to adopt open document formats :

In all humility, and with whatever limited knowledge I have of technology and of the trends taking shape, I am now getting paranoid about the whole e-filing process and the initiatives adopted by the Government.

Although the process has moved in bits and pieces (fits and starts is more like it), the approach adopted by the Government has been rather haphazard. Instead of learning from each other’s experience, every department has tried to do its ‘own thing’.
 
For instance, the e-filing process was kicked off by the Government in 2004. At the time, text files were in vogue (they still are with the eTDS process); then came PDF (MCA-21 and ITRs for corporates in AY 2006-07); last year it was Excel and XML; and the story will go on.

This year the Government is pushing for e-filing not only for Income Tax, but also for Service Tax, VAT and other laws. Even here, there is no uniformity. The Income-tax Department is using the XML format, and the VAT authorities seem to be following suit, but the Excise & Service Tax authorities still depend on an HTML format (EASIEST), while the MCA relies on the PDF format.

The concern stems from the fact that governments don’t create office documents just so that they can be tossed in the shredder. These documents often have to be accessible decades (or centuries) later, and many of them have to be accessible to any citizen, regardless of what equipment he uses or will use. Having said this, the question that needs to be answered is: has the Government given serious thought to the fact that, although PDF is a very useful display format, it serves a different purpose? While it is great at preserving formatting, it doesn’t let you edit the data meaningfully. HTML is great for web pages or short documents, but it is just not capable enough for data mining and data retrieval. Both HTML and PDF will continue to be used, but they cannot serve as a complete replacement.
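The difference is easy to demonstrate. The short Python sketch below (standard library only) parses a made-up XML fragment written in the spirit of an e-filed return; the tag names are purely illustrative and are not the Income-tax Department’s actual schema. The point is that tagged, structured data can be queried field by field and totalled by a program — something a print-oriented PDF cannot offer.

```python
import xml.etree.ElementTree as ET

# A made-up fragment in the spirit of an XML tax return.
# These tag names are illustrative only -- NOT the actual
# schema used by the Income-tax Department's e-filing XML.
RETURN_XML = """<Return assessmentYear="2010-11">
  <Assessee>
    <Name>A B Shah</Name>
    <PAN>XXXXX0000X</PAN>
  </Assessee>
  <Income>
    <Salary>640000</Salary>
    <OtherSources>25000</OtherSources>
  </Income>
</Return>"""

root = ET.fromstring(RETURN_XML)

# Because the data is structured and tagged, a program can pull
# out exactly the fields it wants -- no scraping of page layout.
name = root.findtext("Assessee/Name")
total = sum(int(head.text) for head in root.find("Income"))
print(name, total)  # A B Shah 665000
```

A department (or a taxpayer’s own software) can run exactly this kind of query across lakhs of returns; try doing the same against a folder of PDFs.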

The writing on the wall suggests that the taxpayer, along with dealing with the many intricacies of the law, will now be saddled with the additional burden of dealing with multiple data formats. Nobody knows what will happen 5-7 years down the line, when presumably better formats are in vogue. Unless the Government realises the pitfalls and makes conscientious efforts to develop/adopt standardised, open-standard software, we will all have to save our old software packages, and the files generated through them, on floppies/CDs/DVDs, etc., and pray that they still work when the sleeping giant wakes up.

Smart phones — the next biggest thing ever to happen

Computer Interface

While 2009 was a year of slowdown (in many respects), the
year 2010 has started off with a slew of launches and teasers. When I started my
research for something interesting, I was overwhelmed by the results of my
search. I was literally buried under the information overload. As a result, I
just couldn’t settle on a theme for this column. After much dithering and
scrapping several ideas, I finally settled on this topic.


Trends in the past:

Initially I thought that I was generalizing a bit too
much, but after taking a relook at the trends, it seemed evident that the most
significant developments in the area of personal computing happened during the
nineties. Similarly, the use of, and dependence on, the Internet grew considerably in
the millennium years. It appears that the industry now believes the mobile phone
to be the next ‘biggest thing ever to happen’.


Smartphones:

Smartphones are a (near) perfect example of a dichotomy.
There are some phones which tickle the fancy of the consumer (the iPhone
types), and then there are others which would be the choice of an enterprise, or
the proud possession of a yuppie (the BlackBerry). Needless to say, the
iPhone is not too popular with the enterprise, and the
BlackBerry is not popular with the consumer.

The convergence

Going forward, one is likely to see many attempts to make the
mobile phone a one-point access for both basic and social activities. For
instance, one of the focus points this year appears to be integrating all
social media under one platform and simplifying the user interface. Connectivity
will play a key role in shaping the future of the mobile industry. Real-time
information and analytics, coupled with strong networks, will lead to the
creation of utility-based services for consumers. Developments like mHealth and
mEducation will emerge as the key growth areas of the future.


Rise of the application market

‘Context’ will soon become an important addition to basic
search-based functions (by using analytics). Existing features of the device –
such as GPS, voice-based telephony and in-built cameras – will be used to bring
in context. The trend indicates that the applications market will rise
considerably to bring about the biggest development for the mobile industry. The
focus will be on creating ubiquitous services. This will be instrumental in
evolving the app economy into a successful business model. The growth in the app
economy will power the vision for the mobile ecosystem (which among others
includes telecom operators, content providers and original equipment
manufacturers).

A much tighter integration between application developers and
service providers will ensure a better consumer experience, and the next one and a
half years promise to be an interesting phase for this industry.

Armed with this background, it is interesting to see who is
doing what.

Microsoft’s strategy

Microsoft, whose phone operating systems have not been a big
hit with either of these groups, is launching a mobile operating system that
could (ahem !) appeal to both the consumer and the enterprise.

Over the past two decades, Microsoft has seldom rewritten a
piece of software from scratch, and while each Windows version made substantial
changes, the core has in most cases remained the same. With Windows Phone 7,
Microsoft is apparently making a clean break from the past: it is software
that has supposedly been written from scratch. Windows Phone 7 aims to
provide a user experience that is completely different. It is as minimalist as
it gets. On the screen are several hubs around common themes: people, pictures,
music, videos, Microsoft Office, etc. Phone 7 also uses Bing Maps, which
would probably provide much the same experience as Google Maps. Windows Phone 7 is
supposed to work seamlessly with your PC software, such as Outlook, OneNote and
SharePoint.

It is interesting to note that Apple wanted to merge the MP3
player and the phone; hence the iPhone. (Microsoft seems to be emulating the
idea with the Zune Player and Windows Phone 7.) It seems that Microsoft intends to bring
the phone closer to the PC; hence its mobile operating system has been designed
to work seamlessly with the PC.

The phone will have just three hardware buttons: home, search
and back (and you thought it would be CTRL + ALT + DEL — they have written the
software from scratch, remember !!!!). The phone resembles a Zune Player
(Microsoft’s answer to the iPod).

One of the greatest strengths of Windows Phone 7 is the way
the phone works with a PC. But that could be Google’s strength too, if the PC
world starts shifting to the cloud (the Web-based computing world). Microsoft is
expected to reveal more about applications for the phone at the upcoming MIX
conference in Las Vegas. The next two years will see an interesting battle for
the enterprise smartphone.

Google’s strategy

Just like Microsoft, rival Google too feels that the mobile
phone is at the heart of the Internet giant’s future. According to Google CEO
Eric Schmidt, Internet mobile devices will overtake PCs by 2013, and ‘Mobile
First’ will be the key focus for Google. At the Mobile World Congress in
Barcelona, he outlined how the web giant’s top programmers are now
concentrating on mobile phones. By taking search to mobiles, Google wants to
create an open platform that brings together location-based search with voice
and pictures.

To illustrate: let’s say you are in Barcelona and you are
looking for Indian food. The search platform would recognise that you are in
Barcelona and throw up the most relevant results — Indian restaurants in
the city. The search recognises your location and, when you ask for food
options, identifies your speech and sends you the desired results. The technology
goes further. For instance, if the Indian restaurant’s menu has some parts in
the Devanagari script, and a non-Hindi-speaking person does not understand it, all
the user needs to do is focus the phone camera on the script; within
seconds, the search will recognise the characters and send out intelligent data
on the meaning of the words, with corresponding pictures for better clarity.

Schmidt also said that three unique areas had now converged on the mobile device: computing power, interconnectivity and the cloud. To quote: “The phone is where these three all interconnect, and you need to get these three waves right if you want to win.” Using the examples of Spotify, Facebook and, of course, Google, he highlighted how the cloud concept is being used in both fixed and mobile communications. He also mentioned that recent trends indicate that in Indonesia and South Africa, more and more users prefer searching via mobile phones rather than PCs.

RIM’s strategy

Not too far behind Microsoft and Google, IBM, in collaboration with RIM, has said that it will bundle the Lotus collaboration applications on the BlackBerry. While this would seem an innocuous announcement, the move assumes added significance when you look at a series of related developments in the smartphone world.

A smartphone is generally looked upon as a consumer device, thanks to the large number of applications developed for the consumer, particularly on the iPhone. Typically, a smartphone was used mostly for voice in the enterprise, and recently for email too. Though email is the killer smartphone application in the enterprise, two other sets of applications have emerged now: collaboration and document viewing.

Under the IBM-RIM partnership announced at the Lotusphere conference in Orlando, Lotus Connections will be loaded on to BlackBerry devices. (There will be no fee, as the applications are preloaded.) The BlackBerry is already integrated with Lotus Sametime, a messaging and calendar application that also tells you who else is online. Users will now be able to collaborate and view documents using the BlackBerry.

Enterprise software firms find that for many of the applications they sell, companies ask for a mobile solution as well. This is because workers routinely spend more time on the road; they need to access documents as well as collaborate. Most enterprise users of the smartphone now use it for email, contact management and calendar. Viewing documents comes next (the smartphone will be used only for viewing, not creation), followed by collaboration applications.

Collaboration comes last, not because employees do not use them, but because they are not being made available. You need to pay, for example, for the BlackBerry applications. This is precisely why smartphones with bundled Lotus Connections will make a difference.

Observers think that conferencing is the next hot smartphone application in the enterprise. Till the advent of 3G, it was difficult to both talk as well as connect to the Internet at the same time on the smartphone. Even with 3G, Web conferencing still does not work perfectly in many places including the US, with many users complaining about call drops and delays. These technical issues are likely to be solved in the future, and meetings over smartphones would then become commonplace in offices.

This means several companies will now be working on mobile solutions for smartphones as well. That would also mean a new wave of applications for the smartphone. One major hurdle that needs to be crossed here is that each handset is different: it takes considerable time to develop an application for one handset, and then it has to be developed all over again for another. While we will have to wait till the end of the year to see the outcome of this trend, the three traditional rivals — Apple, Google and Microsoft — are in for a tough battle for market share. No matter who wins, one thing is for sure: the smartphone landscape could change dramatically by next year.

Cheers!

Kal, Aaj aur Kal — Part I

Computer Interface

Kal, Aaj aur Kal is the name of a movie released during the
70s. Befitting its name, the cast consisted of three generations of the Kapoor
clan — Prithviraj Kapoor, Raj Kapoor and Randhir Kapoor — representing
Kal (the past), Aaj (the present) and Kal (the future). In the movie, each
generation felt strongly about the genre of culture in which they were born and
brought up, and couldn’t comprehend how the others could survive without it, or
why the others had no respect for it. Taking a leaf from this theme, I have
pieced together past trends which changed our present and are likely to shape
our future.

Today, life without a cell phone, a laptop or an Internet
connection seems unthinkable. Technology has infiltrated daily life in so
many ways that it is hard to remember that entire generations found ways to reach
others, stay up-to-date and do their jobs without the technology innovations we
take for granted. This write-up is about innovations that may seem standard now,
but whose creation changed the way business is conducted, directly affected
the quality of life, broke new ground, and more. The list is not organised in any
particular order; however, some of the biggest contributors to present
technology are listed in the paras below.

The first among the trendsetters is the Graphical User Interface
(‘GUI’). The first graphical user interface was demonstrated by Douglas Engelbart in
1968. But thanks to companies like Apple, who popularised it, GUI design
advanced significantly in the late ’70s and early ’80s. Because of these
pioneers, we can take it for granted that we interact with our computer using a
mouse, and have easy-to-understand icons and other graphical controls instead of
having to remember a bunch of computer commands.

Of course, without the Personal Computer — the PC/laptop —
our progress would have been stunted. 1981 was a big year for
computers: IBM launched the 5150 model (which it called a ‘personal computer’)
and the Osborne 1 became the first portable computer. Weighing in at 24 pounds,
it challenges our current notion of a laptop. Not to forget that it was MS-DOS,
yes, a Microsoft product, which opened up new possibilities.

Internet/broadband/WWW is an equal contributor: our slavery
to Google, our addiction to Twitter, not to mention our penchant for keeping
up-to-date on any given news topic, and our ability to send and receive far too many
e-mails. The Internet enabled so many other phenomena that it’s startling to
realise that the Internet as we know it only arrived in the ’90s. But it didn’t take
long to change our lives forever.

Online shopping/e-commerce/auctions are also responsible for
turning the fortunes of many. Where would we be without all the Amazons, eBays
and other online stores? Thanks to the Internet being
opened up to commercial use, the ability of companies to capitalise on
electronic transactions took off, as did our hunger for a more peaceful shopping
experience. Today, e-commerce is a given; we book our tickets for
travel or for movies online. It’s obvious
why those wonder years were called the ‘Roaring 90s’.

Mobile phones — take a look at your tiny little cell phone and
be thankful. The first mobile phones, which Motorola unleashed on the market in
1983, were confined to the car (until a few years later, when they became more
mobile) and were the size of a briefcase; in fact, my first handset would very
easily measure up to a remote control. I am absolutely speechless when people
say that they would be lost without their mobile phones. Come to think of it, just a
decade ago most people survived with fixed telephones (apro MTNL). I still
prefer the old school — don’t call me, I’ll call you. In fact, the thought that
soon I will be forced to carry a BlackBerry device is unsettling . . . .

Social networking via the Internet — this is one trend that I
haven’t adjusted to as yet. Internet-based social networks really are very new:
SixDegrees.com (1997) is one of the earliest social network sites. They say that
it wasn’t until MySpace, which launched in 2003, that social networks began to
appeal to the masses. Now, of course, there’s Facebook, which gives you endless
opportunities to have worlds collide, and Twitter, which empowers you to become
your own paparazzi by dropping life tidbits, wisdom, and your comings and goings
to your anxious followers. If you haven’t done it already, do check out
SecondLife (for some it is a second life . . . . literally).

In the next part I hope to cover some innovations which I think will shape
our future . . . .


Headers and Footers

Computer Interface

The aim of this article is to help the readers work
effectively by using automated tools built into the application software. This
article would be useful for beginners as well as intermediate level users.


Most of us have a tendency to underutilise the resources
built into even the older versions of our software. Come to think of it, most of us use the PC
more like a typewriter, thus leaving its computing power utterly untapped. I have
commented on this far too many times in this column, and it is for this reason that I
chose to write an article on an oft-neglected feature like headers and footers.

I’m often surprised to find that certain Word users are
completely unaware of the headers and footers feature in Word. In part, this is
because Word’s designers hid it. Word has a lot of tricks up its sleeve, and the
Insert menu is home to most of them. Some of the useful things that Word has to
offer can be found on the Insert menu: page numbers, date and time, AutoText,
fields, symbols, comments, footnotes and endnotes, cross-references, indexes and
tables, text boxes, pictures, frames and diagrams. However, Header and Footer
is hidden on the View menu. Users who come straight from a typewriter to Word
don’t think of using headers and footers, because they’re used to manually
typing text at the beginning or end of a page. It may not occur to them that
there is a better way. But the header/footer feature in Word is one of its most
useful tools, one that users need to learn how to take advantage of.

Headers and footers in a document :

Headers and footers are areas in the top and bottom margins
(margin : The blank space outside the printing area on a page.) of each page in
a document.

You can insert text or graphics in headers and footers — for
example, page numbers, the date, a company logo, the document’s title or file
name, or the author’s name — that are printed at the top or bottom of each page
in a document.

The question one would ask is: when should I use a header or footer?

Headers and footers are used in the following instances :

  •  Repeated text


Whenever you need to repeat text or graphics on a page.
Usually such text will be a ‘running head’ or ‘running foot’ at the top or
bottom of the page, but header and footer content is not confined to the top and
bottom; it can appear anywhere on the page — in the same place on every page
(but some content can be dynamic; for example, a page number can change on every
page).

  •  Text that stays put


Whenever you need to put text at the beginning or at the end
of a document that will stay put and be out of the way.



Repeated text :

One of the most common elements of a header or footer is a
page number. You may already have figured out how to number pages using the
Insert | Page Numbers command. For simple documents, this feature actually
offers a great deal of power and flexibility : you can omit the page number on
the first page, you can choose where you want it to appear (top or bottom, left,
centre, or right — even inside or outside for facing pages), and you can choose
from a variety of number formats. You can choose to include a chapter number (see picture 2), and you can choose a starting page number. With care, you can
even use this feature in documents with more than one section. If you know what
you’re doing, you can edit the page field that Word inserts for you, to add text
such as “Page” before the number.

Usually, though, in anything but the simplest type of
document, page numbers inserted this way become difficult to use (especially if
you want to combine them with other text). Moreover, if you decide not to use
them, there is no way to “turn them off” from the Page Numbers dialog, and if
you remove them incompletely (failing to delete the frame the page number is
in), you can have puzzling problems down the line (see “Text at the top of the
page is unaccountably indented”). In any situation where you need more than a
simple page number (even something as simple as “Page 1 of n”), you should use a
header or footer (see picture 3). This includes book and chapter titles
(or the name of the author) in books, section titles in reports, logos and
letterheads in letters, watermarks, and so on.

Text that stays put :

The most common example of text that belongs in a header is a
letterhead. You want to put that at the beginning of a letter, and you want it
to be out of the way of other text you will add, so that it doesn’t get pushed
down the page. Usually you don’t want it repeated on every page, so you use a
special kind of header for it. Another example is the text you want to stay at
the end of a document, no matter how much text you add to the document. You can
put that in a footer. Again, you don’t want it repeated on every page, but there
is a way to achieve that too, as will be detailed below.

Creating a header or footer :

As mentioned above, even if you think your document doesn’t
yet have a header or footer, you have to use View | Header and Footer to create
one. This may seem illogical to you, but in fact, the header and footer already
exist; they’re just empty until you put something in them.

Unlike WordPerfect, where the header and footer are at the top and bottom margins and you have to add space between them and the document text, Word reserves space for the header and footer outside the top and bottom margins (as shown in picture 1). They have their own distinct margins, which you set from the Margins tab of File | Page Setup in Word 2000 and earlier, and on the Layout tab in Word 2002 and above. To insert headers and footers, click on Header and Footer on the View menu.

Once you have created a header or footer, you can open it for editing in Print Layout view by double-clicking on the existing content. To open it the first time, however (or to access it from Normal view), you must select View | Header and Footer. When you do this, Word opens the header pane and displays the Header and Footer toolbar (see picture 4). This toolbar offers a number of useful buttons that will be discussed throughout this article. The first one you should find is the Switch Between Header and Footer button. If you are trying to create a footer rather than a header, this is what you need to get to the footer pane.

(The concluding portion of this write up will be published in the next issue of the BCAJ)

The OS war — Episode-II

Computer Interface

Circa Oct. 2009, Amazon was booking orders for copies of
Windows 7. What it didn’t know at the time (or maybe it did, but didn’t
publicise it) was that the bookings would be its biggest ever, and would
gross even more than the latest Harry Potter book.

Windows 7 is now more than a month old (since it hit the
stores on 22nd Oct). According to the grapevine, users are not entirely unhappy.
Early adopters report they’re mostly happy — and that is true for Vista users
even more than XP users, and rightly so. After all Windows 7 is all that its
predecessor, Windows Vista, was expected to be.

For instance :

  • Unlike Vista, Windows 7 hogs fewer resources, making it
    a far better performer, capable of running on less powerful netbooks that
    currently have to use leaner Linux or Windows XP operating systems;

  • PC users are enjoying almost the same kind of performance
    and services that owners of Macintosh and Linux computers have long taken for
    granted.

Of course, there are users who ask why they had to wait so
long — and then have to pay for it. A few of Microsoft’s harsher critics even
argue that many of the improvements that wound up in Windows 7 could have been
released as a free ‘service pack’ a year or so ago — that is, if Microsoft
wanted to salvage Vista. After all, it wouldn’t be the first time ! ! ! ! !

Still, there are others who don’t want to upgrade to Windows
7: a good majority of users are happy with Windows XP, some
cite cost as a deterrent, and a whole bunch of users are waiting for the Windows
7 service pack (already ! ! ! ! ! It’s barely a month old).

Windows XP was popular because it gave all its users the
right to change all sorts of things (and accidentally leave back doors open for
mischief-makers). Vista’s ‘User Account Control’ (UAC) technology clamped
down firmly on the user’s ability to change settings, download software or even
run installed programs. To gain the right to do so, users had to get
authorisation from an administrator. Even then, they were bombarded by UAC
interruptions asking for all sorts of permissions and validations to continue
with whatever they were legitimately trying to do. It was enough to drive most
people insane and deem Vista’s iron-clad security feature an absolute no-no.
Just as bad, the locked-down nature of Vista made it run as slow as a sloth,
soaking up a lot more computing power than XP to perform similar tasks. The extra
security also led to instability and compatibility problems. In short, Vista
fell short of a lot of expectations (that people took for granted with XP). As a
consequence, four out of five XP users (out of an estimated 800m PC owners
around the world) refused to upgrade to Vista. Incidentally, the vast majority
of those who use Vista today acquired it by default when they bought a new
computer.

But there are good reasons for XP users to upgrade. Greatly
improved security is one. Apart from being snappier and more modest in its
needs, Windows 7 is a good deal friendlier than and almost as secure as Vista.
The lack of technical support is another good reason. Microsoft ceased providing
mainstream support for Windows XP last April (though it will continue to offer
bug fixes and security patches for the venerable operating system until 2014).

Without getting into the nitty-gritty of the installation
process and the hardware requirements, let’s get on with what Windows 7 has to
offer :

Better User Account Control and security features :

Ideally, UAC was supposed to keep users safe from
malware, but instead its constant prompts and validations got in the way of users
accessing those controls. Microsoft has apparently learnt from this experience:
Windows 7’s UAC improves the security feature by giving the user the
option to choose the level of intrusiveness (see picture 1).

While Vista users had no choice in using the UAC (except, of
course, turning it off ! ! ! ! ! — see pic. 1), Windows 7 allows the user to
choose from two intermediate notification levels between ‘Always notify’ and
‘Never notify’.

The control is in the form of a slider containing four
security levels. As before, you can accept the full-blown UAC or opt to disable
it. You can tell UAC to notify you only when software changes Windows 7’s
settings (not when you’re tweaking them yourself), and you can also instruct
UAC not to perform the abrupt screen-dimming effect that Vista’s version uses to
grab your attention. Naturally, the convenience comes with a caveat : the slider
advises you not to reduce its severity if you routinely install new software or
visit unfamiliar sites, and it warns that disabling the dimming effect is ‘Not
recommended.’
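
For the technically curious, the four slider positions correspond (on a
standard installation) to a pair of documented registry values under
HKEY_LOCAL_MACHINE. The mapping below is a sketch for reference only — the
value names are Microsoft’s documented UAC policy settings, but the safe way
to change levels remains the Control Panel slider itself :

```
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
; Slider position                    ConsentPromptBehaviorAdmin  PromptOnSecureDesktop
; Always notify                      2                           1
; Notify (Windows 7 default)         5                           1
; Notify, without screen dimming     5                           0
; Never notify                       0                           0
```

PromptOnSecureDesktop is what controls the screen-dimming effect : setting it
to 0 keeps the prompts but drops the dimming.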

Other than salvaging UAC, relatively few significant changes
have been made to Windows 7’s security system. One meaningful improvement :
BitLocker (courtesy of a feature called BitLocker to Go) lets you encrypt USB
drives and hard disks. However, the drive-encryption tool comes only with Windows
7 Ultimate and the corporate-oriented Windows 7 Enterprise. It’s one of the few
good reasons to prefer Win 7 Ultimate to Home Premium or Professional.

Internet Explorer 8, Windows 7’s default browser, includes
many security-related enhancements, including a new SmartScreen Filter (which
blocks dangerous websites) and InPrivate Browsing (which permits you to use IE
without leaving traces of where you’ve been or what you’ve done). Nonetheless,
IE 8 is equally at home in XP and Vista (and it’s free) so it doesn’t constitute
a reason to upgrade to Windows 7.

Fewer, better applications :

It’s rather common for an OS to come with a raft of ancillary
applications bundled along with the main system. Windows 7, however, has taken a
different approach (for that matter, Google’s Chrome OS has gone even further).
Rather than bloating it up with new applications, Microsoft eliminated three
(ahem ! ! !) non-essential programs : Windows Mail (née Outlook Express),
Windows Movie Maker (which premiered in Windows Me), and Windows Photo Gallery.
Users who don’t want to give them up can find all three at live.windows.com as
free Windows Live Essentials downloads. They may even come with your new PC,
courtesy of deals Microsoft is striking with PC manufacturers. Ironic as it
may sound : first they declare the programs non-essential, then they add them to
the list of Windows Live Essentials and even strike deals with PC
manufacturers — strange folks, these software companies, or is there something
else going on in the background ?


Still present — and nicely spruced up — are the operating system’s two applications for audio and video, i.e., Windows Media Player and Windows Media Center.

Windows Media Player 12 has a revised interface that divides operations into

  •  a Library view for media management; and

  •  a Now Playing view for listening and watching stuff.

There is a lot more functionality that’s been built into Media Player 12. Minimise the player into the Taskbar, and you get mini-player controls and a Jump List, both of which let you control background music without having to leave the app you’re in. Microsoft has also added support for several media types (currently not supported by Media Player 11), including AAC audio and H.264 video — the formats it needs to play unprotected music and movies from Apple’s iTunes Store.

Media Center, however, which comes only with the pricier versions of Windows 7, is most useful if you have a PC configured with a TV tuner card and you use your computer to record TV shows à la TiVo. Among its enhancements are a better program guide and support for more tuners.

(to be continued)

Internet Browsers — Part II