This sums up Wikipedia quite well!!!
http://www.comedycentral.com/mother...ayVideo=72347&ml_playlist=&lnk=&is_large=true
Not by every wing nut with an agenda or an axe to grind having the ability to change history with a keystroke. Ha.
We all knew that Wikipedia was not necessarily reliable though, eh? Fact is, if something is contested, we can't really rely on it, although usually when a matter is contested, it says so on the page. Factual matters, though, can be relied on.
By the way how do you suppose a hard-copy encyclopedia is compiled?
Quite right. If I quote Wikipedia, I also add other sources as backup. What I find handy about Wiki is that it says stuff in layman's language. For instance, I could post a Science Magazine article and a lot of people would only understand a tenth of the article (sometimes me included). Ha.
- abstract from Science Magazine:

A 2.91-billion base pair (bp) consensus sequence of the euchromatic portion of the human genome was generated by the whole-genome shotgun sequencing method. The 14.8-billion bp DNA sequence was generated over 9 months from 27,271,853 high-quality sequence reads (5.11-fold coverage of the genome) from both ends of plasmid clones made from the DNA of five individuals. Two assembly strategies--a whole-genome assembly and a regional chromosome assembly--were used, each combining sequence data from Celera and the publicly funded genome effort. The public data were shredded into 550-bp segments to create a 2.9-fold coverage of those genome regions that had been sequenced, without including biases inherent in the cloning and assembly procedure used by the publicly funded group. This brought the effective coverage in the assemblies to eightfold, reducing the number and size of gaps in the final assembly over what would be obtained with 5.11-fold coverage. The two assembly strategies yielded very similar results that largely agree with independent mapping data. The assemblies effectively cover the euchromatic regions of the human chromosomes. More than 90% of the genome is in scaffold assemblies of 100,000 bp or more, and 25% of the genome is in scaffolds of 10 million bp or larger. Analysis of the genome sequence revealed 26,588 protein-encoding transcripts for which there was strong corroborating evidence and an additional ~12,000 computationally derived genes with mouse matches or other weak supporting evidence. Although gene-dense clusters are obvious, almost half the genes are dispersed in low G+C sequence separated by large tracts of apparently noncoding sequence. Only 1.1% of the genome is spanned by exons, whereas 24% is in introns, with 75% of the genome being intergenic DNA. Duplications of segmental blocks, ranging in size up to chromosomal lengths, are abundant throughout the genome and reveal a complex evolutionary history.
Comparative genomic analysis indicates vertebrate expansions of genes associated with neuronal function, with tissue-specific developmental regulation, and with the hemostasis and immune systems. DNA sequence comparisons between the consensus sequence and publicly funded genome data provided locations of 2.1 million single-nucleotide polymorphisms (SNPs). A random pair of human haploid genomes differed at a rate of 1 bp per 1250 on average, but there was marked heterogeneity in the level of polymorphism across the genome. Less than 1% of all SNPs resulted in variation in proteins, but the task of determining which SNPs have functional consequences remains an open challenge.
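For what it's worth, the coverage figures in that abstract roughly check out arithmetically: fold coverage is just total sequenced bp divided by genome size. A quick sketch using only the abstract's own numbers (the small gap from the quoted 5.11-fold presumably comes from the rounded "14.8 billion" figure):

```python
# Back-of-the-envelope check of the fold-coverage figures quoted
# in the abstract above. All inputs are the abstract's own numbers.

total_sequenced_bp = 14.8e9  # shotgun sequence generated over 9 months
genome_size_bp = 2.91e9      # euchromatic consensus sequence length
public_shred_coverage = 2.9  # coverage added by shredded public data

shotgun_coverage = total_sequenced_bp / genome_size_bp
combined_coverage = shotgun_coverage + public_shred_coverage

print(f"shotgun:  {shotgun_coverage:.2f}-fold")   # ~5.09 (abstract says 5.11)
print(f"combined: {combined_coverage:.1f}-fold")  # ~8.0  (abstract says eightfold)
```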
This article compares Encyclopaedia Britannica vs Wikipedia: http://networks.silicon.com/webwatch/0,39024667,39155109,00.htm
"That averages out to 2.92 mistakes per article for Britannica and 3.86 for Wikipedia." Not too bad for a user submitted encylopedia.
Not bad at all. And I suspect the longer Wikipedia exists, the better it'll get.
I doubt it!!!
The longer it exists, the greater the exposure to those who will undoubtedly learn that they can effectively change the perception of reality of many sheeple.
Be wary of any source of information that footnotes itself, provides its own proof, and so on.
The major difference between Wikipedia and the Encyclopaedia Britannica is...
Wikipedia, for the most part, uses internal sources of proof.
Encyclopaedia Britannica cites institutions such as the Smithsonian, etc. Reputable sources.
That's not to say that Wiki doesn't source outside of itself; it does. But in many cases, I have found that source to provide footnote references back to Wikipedia. That is a flaw beyond contempt.
Either way you want to take it, so be it, but don't be surprised when someone wipes their arse with your wikiality.
You must have used Wiki to come up with that... lol... just kidding... going where no brain has gone before?