Starlink: The Future of the Internet Is Bright

A SpaceX illustration of its Starlink satellite – Credit: SpaceX

By Joshua Anderson

Many of us – especially in Orange County, CA – enjoy reliably strong internet service. In a majority of urban areas in the country, internet service providers (ISPs) offer affordable high-speed internet with download speeds usually ranging from 50-500 Mbps. To put that in perspective, a single 4K movie stream takes on average 15 Mbps of bandwidth, and a Zoom meeting with cameras and audio takes on average 2 Mbps. Although ISPs cannot guarantee maximum speeds at all times, we are usually able to enjoy high-speed internet with little thought to how much bandwidth is available. Modern internet accessibility has come very far, and advancements are still being made.

The question many might ask, then, is: why do we need to improve internet technology at all?

The major benefit would be increased accessibility for underserved areas of the world. In contrast to big urbanized cities, rural areas typically suffer unbearably slow speeds of around 5-10 Mbps. During the global pandemic, many people in these rural areas have had no option to work remotely because they lack access to quality internet. This can prevent workers from finding jobs, businesses from finding workers, and students from attending universities while living at home. Pandemic or not, remote work has great value to the economy and to education. Improving the internet in these rural areas could greatly improve the global economy, productivity, and education.

To address this issue, the aerospace company SpaceX is developing a new technology called Starlink. Starlink is an attempt to improve on the concept of satellite internet and provide broadband speeds to rural areas. Since its initial development in 2018, SpaceX has launched over 1,000 Starlink satellites into orbit. The company's ongoing tests of the service show results averaging between 40 Mbps and 400 Mbps at various testing sites around the U.S. and Canada. Those numbers are expected to improve: SpaceX recently announced its goal for maximum speeds to reach 10 Gbps – 25 times the maximum 400 Mbps recorded in its tests, and the equivalent of fiber internet speeds in the middle of a large city. The company hopes to reach 600 Mbps by the end of the year.

Not only is Starlink striving toward nearly instantaneous download speeds anywhere in the world, the company is also developing its infrastructure to include mobile client receivers. This means that you could take your high-speed internet with you in your RV on a road trip, to a rural worksite, or anywhere else where it is not feasible for current ISPs to offer quality service. This feature is still in development, but the popular Canadian YouTube channel Linus Tech Tips has published several videos of its own tests, covering quality measures such as speed, latency, and reliability.

Currently, SpaceX is focusing on bringing this technology to the areas that need it most: mostly rural and remote areas. With its current pricing of $99 a month, Starlink would struggle to compete with ISPs in cities and suburbs where high-speed internet is already very affordable. Long-term goals for highly populated areas are still unclear because the current technology could not sustain quality service at that density. SpaceX CEO Elon Musk said in an interview with podcast host Joe Rogan, “Starlink is great for low to medium population density. But satellites are not great for high-density urban,” implying there should be no expectation of major Starlink coverage in major cities.

A technology such as Starlink is necessary to fill the gaps in the world’s internet infrastructure, which currently provides quality internet only to highly urbanized areas. If SpaceX follows through on this project, it could change the lives of many around the world through improvements in productivity, communication, and accessibility. The internet opens many opportunities for a higher quality of life in the modern age, and a service that can bring those opportunities to new parts of the world will leave a major impact on their economies and livelihoods.

Joshua Anderson is a first-year graduate student at Chapman University studying Computational and Data Sciences. He is a technology columnist for The Hesperian.

What Does the Future Have in Store for Social Media?

By Joshua Anderson

The year 2020 has no doubt had its eventful moments of public prominence. With the presidential election now leaving the news cycle, discussion of social media corporations has come to light due to the Senate hearings of Facebook and Twitter, as well as the release of the increasingly popular film “The Social Dilemma.” Criticism of social media platforms has been on the rise for several years, but it recently received national attention when high-profile conservative public figures such as Dan Bongino and Sen. Ted Cruz opted to leave Twitter for a start-up competitor, Parler. The reasoning behind the switch was accusations of bias amid disputes over the recent “fact-checking” crackdown on the major platforms. Given this dramatic debate, the question that begs an answer is: “What is the future of social media?”

Launched in 2018, Parler is a Twitter-like platform that claims to be: “An unbiased social media focused on real user experiences and engagement. Free expression without violence and no censorship.” Facebook and Twitter have been unforgiving (and arguably biased) in their attempts to police “hate speech,” “violent content,” and “fake news.” In response, Parler has attracted many opposed to Big Tech’s method of enforcing controversial “exceptions” to the First Amendment. 

Parler itself, however, has had its own conflicts over censorship. Parler has been accused of banning users in Twitter-like fashion over strange clauses in its community guidelines. The app has banned a number of people for sexual content, foul language, and, oddly, posts relating to “fecal matter.” This has left many would-be users skeptical of the “free speech platform.” 

Parler not only claims to be a “censorship-free platform,” but also promises not to sell user data to any outside entities. They state in their privacy document that they collect not only the data you provide, but also implicit data gathered from user activity, including location, device information, usage, contacts, and information from cookies (or similar technologies). They list how they use your data – mostly ordinary practices – but also state that, regardless of the promise not to sell user data, they reserve the right to share it with third parties for assistance in analysis.

Taking all of this into consideration, the more that is revealed about Parler, the more it seems to be the same package with a new face. So, if a new platform will not resolve the current tensions in social media, what will? The answer will likely reshape social media into something completely different.

Over the last several years, there has been some non-partisan support for reforming the Communications Decency Act of 1996 (CDA), originally signed into law during the Clinton presidency. The original intent of this law was to regulate indecent content flooding the internet, such as pornography. Several sections of the CDA banning “adult content” were unanimously struck down by the Supreme Court for violating the First Amendment. The most relevant active legislation from the CDA is Section 230. That law was in the national spotlight as recently as 2018 due to a series of high-profile lawsuits against Facebook, and it has been disputed ever since.

Section 230 of the CDA creates legal protections for internet platforms that are inconsistent with traditional American law. These platforms are immune from liability while retaining the privilege of moderating their content. In some ways this has been helpful – it allows a company like Yelp to remove reviews from apparent non-customers without worry – yet it also allows the current, controversial fact-checking censorship with no accountability. This differs from traditional American communications law because it breaks the conventional distinction between publisher, distributor, and platform. In short, that distinction historically tied liability to moderation on a public communication outlet: if a company chooses to regulate public content, it becomes legally liable for what it does not remove. 

Out of the public’s increasing distrust of Big Tech, there has been significant non-partisan support for reforming this legislation – including from President Trump and apparent President-elect Biden – yet there is a divide on the next step. Some are pushing for a revived net neutrality approach, which was abandoned in 2017, while others push for the traditional publisher-versus-platform approach. Both have their drawbacks. Net neutrality advocates would reverse the nationwide gains in internet speeds, give the government access to monitor internet traffic, and hold social media companies liable for what is on their sites, redefining social media as we know it. Those advocating to treat social media companies as platforms rather than publishers would revoke Big Tech’s privilege to police content without providing an effective alternative for addressing First Amendment violations, and would provide no immediate accountability for actions these companies have already taken.

I personally believe there is no perfect solution, as every route has its drawbacks. Either way, everyone agrees something needs to change. Over the next four years, we will likely see a dramatic change that makes Parler, Twitter, and Facebook something other than what we know today. The uncertainty will ultimately be resolved by what we as a country decide is more valuable: protecting freedom or ensuring security.

Joshua Anderson is a first-year graduate student at Chapman University studying Computational and Data Sciences. He is a technology columnist for The Hesperian.

How to Identify False Statistics: Make an Informed and Accurate Vote!

Photo by Ruthson Zimmerman

By Joshua Anderson

Since the 1970s, the world has been in the “Information Age,” marked by mass advancements in electronics, especially computers. The Information Age has given information technology immense economic and cultural value. In more recent years, the continual advancement of computing power, along with the normalization of receiving huge amounts of information daily, has pushed statistical and mathematical modeling into everyday conversations. Media outlets constantly discuss statistical insights about racial issues, the stock market, business decisions, and nearly any other topic that comes to mind. With this overflow of information comes an unprecedented quantity of false information. With the 2020 presidential election coming up, there is a clear mental tug-of-war between the political parties, each using statistical information to convince voters to support their side. I hope to discuss some areas of statistics and statistical modeling that are consistently misused, so that you can critically evaluate the information presented to you and cast your vote in confidence for your candidate.

Unsurprisingly, there are many ways that even the most basic forms of statistics can be made intentionally misleading. One of the most common is data visualization. Graphs are extremely useful and much more interesting to look at than raw data, yet they must be used with extreme caution to present data truthfully. Although there are plenty of ways to skew a graph, one concrete tell of a misleading graph is its axes. Let us take a look at two different examples (note this is not real data):

The bar graph describes the number of new jobs and unemployment claims in the auto industry. You might look at this and think there are significantly more unemployment claims than new jobs. The scatter plot, on the other hand, describes the relationship between modern and older cars with respect to their price and mileage. Looking at it may suggest that lower-mileage cars are priced noticeably higher, especially among older cars. Now let us look at the same graphs with different axes:

These charts look very different, yet they use the exact same data. Now the bar graph appears to show a drastic difference between the number of new jobs and unemployment claims, while the scatter plot looks like there is hardly any relationship between mileage and price. Even though the same data is being presented, the apparent implications are vastly different. 
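
For the technically curious, here is a minimal sketch of the trick in Python using matplotlib. The numbers are made up purely for illustration (they are not the data behind the graphs above); the only thing that changes between the two panels is the y-axis:

import matplotlib.pyplot as plt

labels = ["New jobs", "Unemployment claims"]   # hypothetical auto-industry figures
values = [95_000, 102_000]

fig, (honest, skewed) = plt.subplots(1, 2, figsize=(8, 3))
honest.bar(labels, values)
honest.set_ylim(0, 120_000)            # y-axis anchored at zero: the bars look comparable
honest.set_title("Axis starts at 0")
skewed.bar(labels, values)
skewed.set_ylim(94_000, 103_000)       # truncated y-axis: the same gap looks drastic
skewed.set_title("Truncated axis")
plt.tight_layout()
plt.show()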

Politicians and political activists in particular have used strategies like this to bend the facts to fit their narrative. The data they use may indeed be factual, yet it is presented in a light that changes the overall story. Manipulating statistics this way can take many forms beyond visualizations. A few examples include: inaccuracies and biases in how data is collected, ignoring the uneven distribution of certain categories, failing to cite a reliable source, and failing to state the specific conditions under which the statistic holds true.

Another brief example of simple statistics being used incorrectly comes from the last presidential debate. President Trump claimed that coronavirus has been at its worst in states with predominantly Democratic leadership, while former Vice President Biden claimed it has been at its worst in predominantly Republican-led states. In fact, both are correct, but each refused to use the context referenced by the other. Trump was referring to a large portion of cases from the first two spikes in the U.S. coming from New York, Delaware, California, and Illinois. Biden was referring to the large portion of cases this fall, in America’s third spike, coming from Republican-led states like Wisconsin, South Dakota, and Alabama. Both filter the information that supports their point. Other methods used to distort statistics can be learned in a college-level introductory statistics course. 

What I find more difficult for people to understand and critically evaluate is statistical modeling. In recent years, many cases of uncertainty have been addressed using “models” – more advanced mathematical functions that try to make sense of the data presented to them. Typically, models are used to make predictions or provide inference about the data. They are ubiquitous in media coverage of uncertain events such as coronavirus projections, presidential election forecasts, and disparities in income by gender or race. The issue with this relatively new way of understanding the world in the Information Age is the lack of accountability we give the people who make these models. Dr. Anthony Fauci recently expressed this sentiment at a press briefing on the pandemic, saying, “I know my modeling colleagues are not going to be happy with me, but models are as good as the assumptions you put into them.” This emphasizes the fact that most models are used in environments of uncertainty and change as we learn more about the problem.

By far, the most crucial mistake journalists and reporters make in presenting these models is overlooking the fact that association does not imply causation. In most statistical models, a dataset consisting of some sample of a population is used to estimate the coefficients of a mathematical equation. Data scientists either use that model to make predictions, or use those coefficients to infer the effect of a given variable. Inference is where most non-technical people misinterpret results. Alongside each coefficient, these models output a metric called a p-value, which, roughly speaking, is the probability of seeing an association at least as strong as the one in the data if the given variable (the independent variable) actually had no effect on what we are trying to predict (the dependent variable). When that number is low enough, statisticians declare that the association between the independent and dependent variables is likely not due to chance (i.e., they are correlated). 
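
To make the idea concrete, here is a minimal sketch of what that inference looks like in Python, using the statsmodels library and simulated data (both are my choices for illustration, not a claim about how any particular study was built):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)              # independent variable
y = 2.0 * x + rng.normal(size=n)    # dependent variable with random noise

X = sm.add_constant(x)              # include an intercept term
fit = sm.OLS(y, X).fit()
print(fit.summary())                # each coefficient is listed with its p-value

A small p-value next to the coefficient on x says only that an association this strong is unlikely to appear by chance; it says nothing about why the association exists.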

The issue arises when the media tries to explain this association: they assume that since the effect a factor has on our prediction is not likely due to chance, it must be the cause. This is absolutely and wholly incorrect. For example, if a group of individuals were to contract an illness, they would likely see a doctor, and if that illness were severe enough, they would be admitted to the hospital. A statistical model that uses visits to the doctor as an independent variable to predict whether an individual will be admitted to the hospital will likely show the two are correlated. Using the media’s logic, one could claim this model proves that if you visit the doctor, you are more likely to be hospitalized. Obviously this is false, but why? Take a look at this diagram:

Our predictive model only gives us statistical inference showing correlation. If we wish to prove that an independent variable causes the dependent variable, further analysis is needed. Intuitively, we know that the illness – not the doctor visits – is the cause of the hospital admissions, but in many instances these models are used to find unknown relationships. They are often used to infer causal relationships when all they have demonstrated is association, and this has been a major contributor to the spike in false information in recent years.
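
A toy simulation makes the trap easy to see. Everything below is invented for illustration: illness drives both doctor visits and hospital admissions, so the two end up correlated even though visiting the doctor causes nothing:

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
ill = rng.random(n) < 0.10                      # 10% of people become ill
visits_doctor = ill | (rng.random(n) < 0.05)    # the ill (plus a few healthy people) see a doctor
admitted = ill & (rng.random(n) < 0.50)         # only the ill are ever admitted to the hospital

corr = np.corrcoef(visits_doctor, admitted)[0, 1]
print(f"correlation(doctor visits, admissions) = {corr:.2f}")   # clearly positive

The correlation is real, but once you account for illness it vanishes: among people who are not ill, nobody is admitted whether or not they saw a doctor.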

Statistics is not an easy science, and much of what I discussed may be difficult to follow if you do not come from a technical background. Most of what I study is the theory behind constructing these models. So, if there is anything you should take away from this, it is that there are countless examples of these models being misinterpreted, deliberately misrepresented, or missing the whole picture, causing significant misdirection in our understanding of uncertainty. If we wish to find truth in the age of false information, we can no longer take information at face value; rather, we should critically evaluate how it is presented to determine its honesty. When you fill out your ballot this year, I hope you consider the role statistical persuasion may have played in the information provided by political groups, so that you can make the best decision.

Joshua Anderson is a first-year graduate student at Chapman University studying Computational and Data Sciences. He is a technology columnist for The Hesperian.

TikTok Ban: Why Everyone Should Care About Their Data Privacy

Photo from Pixabay

By Joshua Anderson

From Facebook data breach lawsuits to the enactment of the General Data Protection Regulation (GDPR), data privacy has become an intensifying topic in political discourse around the globe. The expanding issue has revealed the general public’s lack of awareness around big data. Recent data privacy controversies have also revealed the younger generation’s complacency and disregard for data privacy in the age of unregulated technology. 

TikTok, a short-form video social media app, recently entered the spotlight of the data privacy discussion. On August 14, 2020, President Trump issued an executive order requiring ByteDance, the Beijing-based Chinese parent company of TikTok, to cease U.S. operations if it does not sell TikTok to a U.S. entity by November 12. Regardless of whether this move by the Trump administration was an essential step or a ludicrous misstep, it is yet another major flashpoint in the world of data privacy.

The Trump administration, along with many in Congress, has accused TikTok of sharing user data with the Chinese government, which would violate the privacy and security of U.S. citizens. The accusations stem from Beijing’s legal ability to compel ByteDance to hand over user data, though ByteDance has denied all such allegations. In light of this conflict, the Chinese government concurrently amended its export control rules for the first time since 2008. The change affected 25 categories of goods, including artificial intelligence assets such as TikTok’s user-interest algorithm, and would significantly complicate any sale of TikTok to a U.S. firm.

In response to these complications, a preliminary deal with Oracle and Walmart was confirmed on September 19. Since the deal is not yet finalized, TikTok is still at risk of being banned. On September 27, a federal judge blocked any change for the U.S. user base of over 100 million people until a full court hearing.

This is not the first time a camera-based app has raised international data privacy concerns in the U.S. In 2019, the FBI investigated FaceApp, a popular photo filter app created by the Russia-based company Wireless Lab, over suspicions that user images were being sent to the Russian government. The suspicions came from a clause in FaceApp’s terms and conditions establishing its right to modify, reproduce, and publish any of the images users processed through its artificial intelligence algorithm. This case illustrates how tech companies can be used by potentially hostile foreign nations to spy on and gather information from U.S. citizens.

Big Data

Big data is defined as “an accumulation of data that is too large and complex for processing by traditional database management tools.” Companies expend tremendous amounts of resources to obtain as much data as they can about their users. TikTok in particular has massive amounts of video data posted by its users in addition to implicit data like user location, age, etc. 

Why would companies pay so much attention to large and overly complex sets of data? 

In contrast to the general public, technologists and business owners understand that data has not only immense monetary value, but also informational value. To these people, data is unprocessed information that can be refined into compelling insights. Because big data is too complex for any one human to extract insights from by hand, tools like artificial intelligence – which can offer predictive power and uncover relationships in the data – have become increasingly popular. The insights that can be extracted from big data are now accessible to anyone with a computer and the time to learn Python on YouTube.

With advancements in technology – particularly data science – the power of image processing has increased exponentially. Images contain massive amounts of information that, with newer techniques such as convolutional neural networks (CNNs) and advancing computer hardware, can yield unprecedented results. At Chapman University, Dr. Erik Linstead, Associate Dean of Fowler Engineering, and members of the Machine Learning Assistive Technology (MLAT) lab recently published a study, “A Deep Learning Approach to Identifying Source Code in Images and Video,” that applies this technology to thousands of video images. Their goal was to identify whether any given image or video contained code – specifically Java – whether handwritten, typed, or only partially visible. The models used achieved 85.7 to 98.7 percent accuracy in their classifications.
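
For readers curious what such a model looks like in code, here is a minimal sketch of a convolutional classifier in Python with Keras. The layer sizes, the 128-by-128 grayscale input, and the single code/no-code output are my own illustrative assumptions, not the architecture used in the MLAT study:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),         # one grayscale frame from a video
    layers.Conv2D(16, 3, activation="relu"),   # convolutional filters learn local visual features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # estimated probability the frame contains code
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()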

Turning to the content on TikTok, technologies such as CNNs can be used to classify objects in videos, from the tie on someone’s neck to the model of a car on the street. Specific questions about a location, person, item, etc. can be answered given enough videos of it. That is without even considering the IP addresses, geolocation data, browsing history, and other user data that TikTok’s terms and conditions state it automatically collects.

Privacy

The general sentiment around the banning of TikTok – especially among the younger population – is something along the lines of: “They have all my data anyway, so why does it matter?” This indifference can be dangerous. In addition to the aforementioned national security risks, waning care for personal privacy has historically led to weaker defenses of privacy in how the courts interpret our legislation.

Katz v. United States (1967) was a pivotal application of judicial review, establishing the main reference point for understanding privacy in the modern era. In the majority opinion, Justice Potter Stewart wrote: 

“For the Fourth Amendment protects people, not places. What a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection. But what he seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”

Not long after, Smith v. Maryland (1979) ruled that telephone companies could record the phone numbers a customer dialed, rejecting the idea of a “legitimate expectation of privacy” with the reasoning, “We doubt that people in general entertain any actual expectation of privacy in the numbers they dial.” This ruling shows that the country’s culture and expectations around data privacy influence how we are governed.

In response to the increasing value of private data – especially in the 21st century – we have seen major legislative advancements like the GDPR, which also inspired the California Consumer Privacy Act (CCPA). These laws add substantial restrictions on what a company can do with a customer’s data without their consent. While people may differ on whether these regulations are productive or counterproductive, the support for legislation such as the GDPR shows a collective desire for privacy concerning personal data.

Our expectation of privacy shapes the laws that govern us, and there are many more amoral uses of data than the average person realizes. TikTok is only one chapter in the story of modern data regulation. The rising generation’s posture toward data privacy – whether indifference or acknowledgment – will be ever more important in the biggest data regulation decisions yet to be made.

Joshua Anderson is a first-year graduate student at Chapman University studying Computational and Data Sciences. He is a technology columnist for The Hesperian.