Like funny music, novelty songs and parodies? Check out The FuMP, The Funny Music Project. They post a new, hilarious song every few days, and you can catch up on past picks in their archives.

I heard about this site on the CNET Buzz Out Loud podcast, which mentioned the song Dead Nintendo by Possible Oscar on yesterday’s episode. I think my current favorite would have to be She’s Underage by Seamonkey.

If you want to get their picks as they come out, you can even subscribe to their RSS feed with the software of your choice. It’s like a little taste of Dr. Demento every few days.


parody, music, filk, humor, funny, hilarious, silly, strange, songs, free music

Oops! The rare Enzo Ferrari just got a little more rare. In this video, actor Eddie Griffin has some technical difficulties making a turn and powders a million-dollar car into a Jersey barrier.

If you look closely, you can see that the wheels are actually turned but for some reason just not gripping. I can only guess there may have been something on the track. Poor Enzo.

enzo, ferrari, supercar, car, cars, crash, accident, expensive

The ls command is the main way to browse directory contents on UNIX and Linux. While it can be used with no options, there are several options that will customize its output.

Using Simple ls Command Options

There will come a time when a user will want to know the last file touched, the last file changed, or maybe the largest or smallest file within a directory. This type of search can be performed with the ls command. Previously the ls command was used to display directories and the files within them, but by using some of the ls command options and piping the output of ls to the head command to limit the number of displayed lines, we can find some of these more specific results.

The following home directory is used for the next few examples. Using the -A option makes ls show files beginning with . but eliminates the . and .. entries from the display.

$ ls -Al
total 44
-rw------- 1 tclark tclark 7773 Feb 2 17:11 .bash_history
-rw-r--r-- 1 tclark tclark 24 Aug 18 11:23 .bash_logout
-rw-r--r-- 1 tclark tclark 191 Aug 18 11:23 .bash_profile
-rw-r--r-- 1 tclark tclark 124 Aug 18 11:23 .bashrc
-rw-r--r-- 1 tclark tclark 237 May 22 2003 .emacs
-rw-rw-r-- 1 tclark tclark 0 Feb 3 09:00 example1.fil
-rw-rw-r-- 1 tclark tclark 0 Jan 13 21:13 example2.xxx
drwxrwxr-x 2 tclark authors 4096 Jan 27 10:17 examples
-rw-r--r-- 1 tclark tclark 120 Aug 24 06:44 .gtkrc
drwxr-xr-x 3 tclark tclark 4096 Aug 12 2002 .kde
-rw-r--r-- 1 tclark authors 0 Jan 27 00:22 umask_example.fil
-rw------- 1 tclark tclark 876 Jan 17 17:33 .viminfo
-rw-r--r-- 1 tclark tclark 220 Nov 27 2002 .zshrc

Finding the File Last Touched (Modified) in a Directory

The -t option is used to sort the output of ls by the time the file was modified. Then, the first two lines can be listed by piping the ls command to the head command.

$ ls -Alt|head -2
total 44
-rw-rw-r-- 1 tclark tclark 0 Feb 3 09:00 example1.fil

Using the pipe (|) character in this way tells Linux to take the output of the command preceding the pipe and use it as input for the second command. In this case, the output of ls -Alt is taken and passed to the head -2 command, which treats the input just like it would a text file. This type of piping is a common way to combine commands to do complex tasks in Linux.
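The same piping idea works with other commands besides head. Here is a small sketch (the directory and file names are assumptions, not from the listing above) piping ls into wc and grep:

```shell
# Build a scratch directory so the example is self-contained
rm -rf /tmp/pipe_demo && mkdir /tmp/pipe_demo && cd /tmp/pipe_demo
touch alpha.txt beta.txt gamma.log

# Count the entries in the directory by piping ls to wc
ls -A | wc -l

# Show only entries matching a pattern by piping ls to grep
ls -A | grep '\.txt$'
```

Note that when ls output is piped, entries are printed one per line, which is exactly what line-oriented tools like wc, grep and head expect.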

Finding the File with the Last Attribute Change

The -c option changes ls to display the last time there was an attribute change of a file, such as a change of permissions, ownership or name.

$ ls -Alct|head -2
total 44
-rw-rw-r-- 1 tclark tclark 0 Feb 3 09:07 example1.fil

Again we are using the head command to see only the first two rows of the output. While the columns for this form of the ls command appear identical, the date and time in the output now reflect the last attribute change. Any chmod, chown, chgrp or mv operation will cause the attribute timestamp to be updated.
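A quick way to see this for yourself is the sketch below (filenames are assumptions): two files start out identical, then a chmod on one of them makes it the most recently "changed" file even though its contents were never touched.

```shell
# Scratch directory with two files created at the same moment
rm -rf /tmp/ctime_demo && mkdir /tmp/ctime_demo && cd /tmp/ctime_demo
touch first.fil second.fil
sleep 1

# Attribute change only: permissions, not contents
chmod 600 second.fil

# Sorted by attribute-change time, second.fil now appears first
ls -Act | head -1
```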

Finding the File Last Accessed in a Directory

Beyond file and attribute modifications, we can also look at when files were last accessed. Using the -u option will give the time the file was last used or accessed.

$ ls -Alu|head -2
total 44
-rw------- 1 tclark tclark 7773 Feb 3 08:56 .bash_history

Any of these ls commands could be used without the |head -2 portion to list information on all files in the current directory.

Finding the Largest Files in a Directory

The -S option sorts files by their size, in descending order. Using this option and the head command, this time to see the first four lines of output, we can find the largest files in our directory.

$ ls -AlS|head -4
total 44
-rw------- 1 tclark tclark 7773 Feb 2 17:11 .bash_history
drwxrwxr-x 2 tclark authors 4096 Jan 27 10:17 examples
drwxr-xr-x 3 tclark tclark 4096 Aug 12 2002 .kde

Finding the Smallest Files in a Directory

Adding the -r option reverses the display, sorting sizes in ascending order.

$ ls -AlSr|head -4
total 44
-rw-r--r-- 1 tclark authors 0 Jan 27 00:22 umask_example.fil
-rw-rw-r-- 1 tclark tclark 0 Jan 13 21:13 example2.xxx
-rw-rw-r-- 1 tclark tclark 0 Feb 3 09:00 example1.fil

The -r option can also be used with the other options discussed in this section, for example to find the file which has not been modified or accessed for the longest time.
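For instance, combining -r with -t puts the least recently modified file at the top of the listing. A minimal sketch, with assumed filenames:

```shell
# Two files modified at different times
rm -rf /tmp/oldest_demo && mkdir /tmp/oldest_demo && cd /tmp/oldest_demo
touch stale.fil
sleep 1
touch fresh.fil

# Reverse time sort: the file untouched for the longest appears first
ls -Altr | head -2
```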

Use of the ls command options is acceptable when the user is just interested in files in the current working directory, but when we want to search over a broader structure we will use the find command.
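As a first taste of why find is the tool for broader searches (the paths and names below are assumptions): unlike ls, find descends into subdirectories, searching a whole tree at once.

```shell
# A small tree: one file at the top level, one in a subdirectory
rm -rf /tmp/find_demo && mkdir -p /tmp/find_demo/subdir
touch /tmp/find_demo/top.fil /tmp/find_demo/subdir/nested.fil

# List every regular file under the tree matching a pattern,
# something a single ls invocation cannot do
find /tmp/find_demo -type f -name '*.fil'
```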

For more tips like this, check out my book Easy Linux Commands, only $19.95 from Rampant TechPress.

Buy it now!


unix, linux, system administration, sysadmin

After my first response to Donald Burleson’s article The web is becoming a dictatorship of idiots Donald responded. Here is his response followed by my response to him.

From: Donald Burleson

Here are my guidelines for finding credible information on the web, and sound advice on how to weed out crap.

> In my opinion (and in my own interest) I think everyone should be able to publish anything at anytime.

Me too. I’m all for free speech, but it’s the search engines’ problem that they cannot distinguish between good and bad information. I don’t like the “clutter” it’s causing for the search engines. It ruins my ability to find credible sources of technical information, and I have to wade through pages of total crap from anonymous “experts”. For example, scumbags are stealing credible people’s content and re-publishing it in their own names, with free abandon. Look at what has been stolen from Dr. Hall.

> So the system can (and will eventually) balance itself.

I disagree, not until “anon” publications and copied crap are unindexed from the search engines.

If I’m using Google to find technical information I give zero credibility to anonymous sources, and it would be great to have a “credible” way to search the web for people, so they can find stuff from folks like us, who publish our credentials.

> We’re in the information age and the flood gates have opened!

Flood is the right word. Some of the Oracle “experts” who publish today would never have been able to publish in print, and for very good reason. There are many self-proclaimed “experts” all over the web, people without appropriate education or background who would never be published in traditional media. And just like “Essjay” on Wikipedia, many of them either fabricate or exaggerate their credentials. They carefully hide their credentials (resume or CV), so nobody knows the truth.

> I think it’s up to culture to catch up to technology

I disagree, it’s not “culture”, it’s a simple credibility issue. And what about Wikipedia? Any 9th-grade dropout crackhead can over-write the work of a Rhodes scholar. That’s not a culture issue, it’s about credibility.

It’s a dictatorship of idiots. One bossy Wikipedia editor tossed about his credentials (“a tenured professor of religion at a private university” with “a PhD. in theology and a degree in canon law.”), when in reality he is a college dropout, a liar and a giant loser.

Wikipedia is the enemy of anyone who wants to find credible data on the web, and they are actively seeking to pollute the web with anon garbage. Read this for details.

It’s the balance between free speech and credibility. Just the raw link-to counts are deceiving. I hear that the #1 Oracle blogger got there only because he wrote a hugely successful blog template, totally unrelated to his Oracle content quality.

The solution is simple. Sooner or later, someone will come up with a “verified credentials” service where netizens pay a fee and an independent body verifies their college degrees, published research, job experience and other qualifications.

Until then, netizens must suffer the dictatorship of idiots, never sure if what they are reading is by someone who is qualified to pontificate on the subject. I do Oracle forensics, and the courts have very simple rules to determine if someone is qualified to testify as an expert, and there is no reason that these criteria cannot be applied on the web, assigning high rank to the qualified and obscurity to the dolts. Until then we must suffer, weeding through page after page of questionable publications in our search results.

My response

> it’s the search engines’ problem that they cannot distinguish between good and bad information. I don’t like the “clutter” it’s causing for the search engines.

There’s no doubt that web indexing and searching is an imperfect science but identifying the quality of resources is beyond its scope. Search engines like Google, Yahoo and MSN should be considered tools to help find a site with information matching a term or pattern, not necessarily a good site.

> scumbags are stealing credible people’s content and re-publishing it in their own names

Plagiarism is not a new problem and, as many have found, search engines can be instrumental in identifying plagiarism. The site Copyscape which you pointed out to me makes great use of Google’s API to do exactly that.

> So the system can (and will eventually) balance itself.

> I disagree, not until “anon” publications and copied crap are unindexed from the search engines.

> If I’m using Google to find technical information I give zero credibility to anonymous sources, and it would be great to have a “credible” way to search the web for people, so they can find stuff from folks like us, who publish our credentials.

And you should not give credibility to a source just because Google finds it. That’s not Google’s job. Google’s job is to find pages (every page if possible) that match the terms you’re entering. Popular sites are weighted to show up earlier in the results, but yes, only because they are popular.

> Wikipedia is the enemy of anyone who wants to find credible data on the web, and they are actively seeking to pollute the web with anon garbage.

I think it’s unlikely that Wikipedia is actively trying to pollute the web. Wikipedia is fundamentally flawed for many of the reasons you mention but it remains accurate on many topics. There is no disguising of what it is and it has been largely condemned as an academic resource, but when I need a quick ‘starting point’ reference or the answer to some pop-culture trivia it’s still the place I go.

> It’s the balance between free speech and credibility. Just the raw link-to counts are deceiving. I hear that the #1 Oracle blogger got there only because he wrote a hugely successful blog template, totally unrelated to his Oracle content quality.

Actually, I think you’ll find that the #1 Oracle blog you mention is the non-topical personal blog of an Oracle administrator. The fact that he composed an attractive and well-written WordPress theme is a testament to the quality of his work.

> The solution is simple. Sooner or later, someone will come up with a “verified credentials” service where netizens pay a fee and an independent body verifies their college degrees, published research, job experience and other qualifications.

Verified credentials would only solve one small piece of the problem. Many people with verifiable credentials are still dead wrong and/or cannot communicate their ideas effectively enough to be what I consider a good resource.

An even simpler solution already exists. Leading organizations like the Independent Oracle Users Group could take it upon themselves to compile and publish lists of quality resources in their field. With some additional effort, I bet these lists could be combined with Google’s search API to provide a web search which only searches a number of “verified” sites.

This type of compilation would not only provide a fantastic list of resources (especially for beginners) but would also shape search results by increasing the page ranking of sites which the organization identifies as good resources.

web2.0, web, internet, blog, wikipedia, free speech, net neutrality, online, anonymous

Last week Donald Burleson posted an article entitled The web is becoming a dictatorship of idiots. In it he references a Newsweek article which blasts Wikipedia as “no more reliable than the output of a million monkeys banging away at their typewriters” and claims “sites like Wikipedia, along with blogs, YouTube and iTunes, are rapidly eroding our legacy of expert guidance in favor of a ‘dictatorship of idiots.'”

I encourage you to read and share your opinions on Don’s article. Below is my response. My next article by the same title will have my response to his response.

Don,

Don’t you think there is some responsibility for the reader to be able to filter their sources for what they are? Is this a matter for legislation or education?

In my opinion (and in my own interest) I think everyone should be able to publish anything at anytime. If I post something that is completely ridiculous on my blog I expect people to tell me that. They might be right, I might be right, but either way at least it’s out there. Anyone can publish and anyone can respond.

Here’s a good example where Matt posted what he thought was a good idea of how to create auto-increment fields in Oracle without the use of Triggers.

I responded with a detailed article demonstrating why his method would not work and he followed up with another article with an updated method.

So the system can (and will eventually) balance itself. We’re in the information age and the flood gates have opened! I think it’s up to culture to catch up to technology. You and I know how to flush out good web resources. The rest of the world will catch up soon.

What do you think?
