The Journey of #100DaysofCode (@eliehannouch)

100daysofcode - Day 53

Hello friends, the 3rd day of our second half is already here, and as every day the counter :arrows_counterclockwise: is increasing, and with each increment a lot of new information gets ready to be shared with our amazing community. :star_struck::heart:

In today's post we will talk about Agile effort :muscle: estimation techniques, diving :diving_mask: into each of them and exploring how it works.

There are all kinds of techniques to use when estimating effort in an Agile way. Effective relative effort estimation leads to successful and predictable sprint outcomes, which leads to a successful project overall. Generally speaking, the main steps to Agile estimation are the same, even if the specific approach varies. Some examples of Agile estimation techniques are:

  • Planning Poker
  • Dot Voting
  • The Bucket System
  • Large/Uncertain/Small
  • Ordering Method

Planning Poker :black_joker:

  • This particular method is well-known and commonly used when Scrum teams have to make effort estimates for a small number of items (under 10). Planning Poker is consensus-based, meaning that everyone has to agree on the number chosen. In this technique, each individual has a deck of cards with numbers from the Fibonacci sequence on them. In the Fibonacci sequence, each number is the sum of the two preceding numbers (0, 1, 1, 2, 3, 5, 8, 13, and so on); Planning Poker decks typically use a modified version of this sequence (e.g., 0, 1, 2, 3, 5, 8, 13, 20).

  • Sometimes, Planning Poker decks also include cards with coffee cups and question marks on them. The question mark card means that the person doesn’t understand what is being discussed or doesn’t have enough information to draw a conclusion. The coffee cup card means that the person needs a break.

  • The Planning Poker strategy is used in Sprint Planning meetings. As each Product Backlog item/user story is discussed, each team member lays a card face down on the table. Then, everyone turns their card over at the same time and the team discusses the estimates, particularly when they are far apart from one another. By first hiding the estimates, the group avoids any bias that is introduced when numbers are said aloud. Sometimes, when hearing numbers aloud, people react to that estimate or to the estimator themselves, and it changes what their initial thought may have been. In Planning Poker, teams can easily avoid that bias.
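To make the flow concrete, here is a minimal Python sketch of a Planning Poker reveal, with invented team members, story, and card values (illustrative only, not part of any Scrum tooling):

```python
# Illustrative Planning Poker round; team members, story, and cards are invented.
MODIFIED_FIBONACCI = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def reveal(story, picks):
    """Flip every card at once and report whether the team already agrees."""
    print(f"Story: {story}")
    for member, card in picks.items():      # each card was chosen privately
        assert card in MODIFIED_FIBONACCI, f"{card} is not a valid card"
        print(f"  {member} shows {card}")
    if len(set(picks.values())) == 1:
        print("Consensus reached:", next(iter(picks.values())))
    else:
        print("Estimates differ -- discuss the outliers and vote again.")

reveal("Add login page", {"Ana": 5, "Bilal": 8, "Chen": 5})
```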

Dot Voting :green_square: :blue_square: :orange_square: :red_square:

  • Dot Voting, like Planning Poker, is also good for sprints with a low :chart_with_downwards_trend: number of Sprint Backlog items. In Dot Voting, each team member starts with small dot stickers, color-coded by the estimated effort required (e.g., S=green, M=blue, L=orange, XL=red). The items or user stories are written out on pieces of paper placed around a table or put up on the wall. Then, team members walk around the table and add their colored stickers to the items.

The Bucket System :rocket:

  • The Bucket System is helpful for backlogs with many items since it can be done very quickly. In fact, a couple hundred items can be estimated in just one hour with the Bucket System. The Bucket System is an effective strategy for sizing items because it explores each item in terms of pre-determined “buckets” of complexity. Keep in mind that these buckets are metaphorical; this strategy doesn’t require the use of actual buckets, and instead uses sticky notes or note cards as buckets.

  • In this technique, the team starts by setting up a line of note cards down the center of the table, each marked with a number representing a level of effort. Then, the team writes each item or user story on a card. Each person draws and reads a random item, then places it somewhere along the line of numbered note cards. There is no need to discuss further with the team. If a person draws an item that they do not understand, then they can offer it to someone else to place. Additionally, if a person finds an item that they think does not fit where it was placed, they can discuss it with the team until a consensus about a more accurate placement is reached. Team members should spend no more than 120 seconds on each item.
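A rough sketch of the same idea in Python, where the numbered note cards become the keys of a dictionary (item names and placements are invented for illustration):

```python
# Hypothetical Bucket System board: numbered "buckets" laid out down the table.
buckets = {effort: [] for effort in (0, 1, 2, 3, 4, 5, 8, 13)}

def place(item, effort):
    """A team member draws an item and drops it into one effort bucket."""
    buckets[effort].append(item)

place("Update footer copyright", 1)   # read aloud, placed, no discussion needed
place("Add CSV export", 5)
place("Migrate auth service", 13)     # anyone who disagrees can raise it for discussion

for effort, items in buckets.items():
    if items:
        print(f"Bucket {effort}: {items}")
```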

Large/Uncertain/Small :grin::boom:

  • Large/Uncertain/Small is another quick method of rough estimation. It is great for product backlogs that have several similar or comparable items.
    This is the same general idea as the Bucket System, but instead of several buckets, you only use three categories: large, uncertain, and small. Starting with the simpler, more obvious user stories, the team places the items in one of the categories. Then, the team discusses and places more complex items until each is assigned to a category.

Ordering Method :shopping_cart:

  • The Ordering Method is ideal for projects with a smaller team and a large number of Product Backlog items. First, a scale is prepared and items are randomly placed ranging from low to high. Then, one at a time, each team member either moves any item one spot lower or higher on the scale or passes their turn. This continues until team members no longer want to move any items.

Characteristics of effective estimation :innocent::star:

Regardless of which technique your team chooses, there are several important characteristics the techniques share that lead to effective estimation:

  • Avoids :x: false precision in estimates. In Scrum, assigning rough estimates results in more accuracy across the project. Therefore, if the team focuses on identifying relative estimates, rather than having a lengthy debate about whether a task will take seven or 10 days of work, the team saves time and avoids potentially missing deadlines.

  • Avoids :x: anchoring bias. Many of these techniques (e.g., Planning Poker) keep the initial estimate private, which allows team members to form an independent opinion on the estimate before sharing their thoughts with the team. This prevents a known phenomenon called anchoring bias, where individuals find themselves compelled to put forth estimates similar to others in the room to avoid embarrassment.

  • Promotes :studio_microphone: inclusivity. These group estimation techniques not only lead to better estimates but also help the team develop trust and cohesiveness.

  • Leads to effort :world_map: discovery. Estimating in these dynamic ways can help the team uncover strategies to get items completed which might otherwise not have been revealed.


100daysofcode - Day 54

Hello friends, the 54th day is already here. Today was not a typical learning day for me, as it was a special one: talking with the Tunisian :tunisia: community online in a 2-hour workshop. Invited by their community organizers, it was an honor for me to talk about MongoDB :four_leaf_clover:, where we dived together into how to structure and model data around the user requirements, helping us design an efficient schema for our system. Moving to Atlas :earth_americas:, we explored the different deployment options to take maximum advantage of our multi-cloud database, allowing our system to scale with ease.

Here are some sneak peeks :camera_flash: from the e-workshop, with a special thanks to @Darine_Tleiss :heart: and @abedattal for joining me on their off day to help facilitate the session :rocket:


100daysofcode - Day 55

Hello friends, the 55th day is here, so let's learn some amazing stuff and extend our knowledge. In today's post we will explore the :six: major steps in the data analytics process, which help us as project managers understand the business's needs and challenges and determine effective solutions. :hugs::innocent:

There are six main steps involved in data analysis: Ask, prepare, process, analyze, share and act. Let’s break these down one by one.

  • During the Ask :grey_question: phase, ask key questions to help frame your analysis, starting with: What is the problem? When defining the problem, look at the current state of the business and identify how it is different from the ideal state. Usually, there is an obstacle in the way or something wrong that needs to be fixed. At this stage, you want to be as specific as possible. You also want to stay focused on the problem itself, not just the symptoms.

  • Another part of the Ask stage is identifying your stakeholders and understanding their expectations. There can be lots of stakeholders on a project, and each of them can make decisions, influence actions, and weigh in on strategies. Each stakeholder will also have specific goals they want to meet. It is pretty common for a stakeholder to come to you with a problem that needs solving. But before you begin your analysis, you need to be clear about what they are asking of you.

  • After you have a clear direction, it is time to move to the Prepare stage. This is where you collect and store the data you will use for the upcoming analysis process. :fried_egg::rocket:

  • Next, it is time to Process your data (a small cleaning sketch in Python follows this list). In this step, you will “clean” :wastebasket: :broom: your data, which means you will enter your data into a spreadsheet, or another tool of your choice, and eliminate any inconsistencies and inaccuracies that can get in the way of results. While collecting data, be sure to get rid of any duplicate responses or biased data. This helps you know that any decisions made from the analysis are based on facts and that they are fair and unbiased.

  • During this stage, it is also important to check the data you prepared to make sure it is complete and correct and that there are no typos or other errors.

  • Now it is time to Analyze :bar_chart:. In this stage, you take a close look at your data to draw conclusions, make predictions, and decide on next steps. Here, you will transform and organize the data in a way that highlights the full scope of the results so you can figure out what it all means. You can create visualizations using charts and graphs to determine if there are any trends or patterns within the data or any need for additional research.

  • Once you have asked questions to figure out the problem—then prepared, processed, and analyzed the data—it is time to Share your findings. In this stage, you use data visualization to organize your data in a format that is clear and digestible for your audience. When sharing, you can offer the insights you gained during your analysis to help stakeholders make effective, data-driven decisions for solving the problem.

  • And finally, you are ready to Act! In the final stage of your data analysis, the business takes all of the insights you have provided and puts them into action to solve the original business problem.
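As promised above, here is a small, hedged sketch of the Process step, assuming pandas is available and using a made-up survey_responses.csv file with an assumed respondent_id column:

```python
import pandas as pd

# Hypothetical raw responses collected during the Prepare stage.
df = pd.read_csv("survey_responses.csv")

# Process: drop exact duplicate submissions and rows missing the key field.
df = df.drop_duplicates()
df = df.dropna(subset=["respondent_id"])   # "respondent_id" is an assumed column name

# Quick sanity check before moving on to the Analyze stage.
print(df.describe(include="all"))
```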

100daysofcode - Day 56

Hello friends, 56/100 is here, and some extra knowledge is required to wrap up the day. In yesterday's :spiral_calendar: post we explored the major steps in the data analytics process, which help us as project managers understand the business's needs and challenges and determine effective solutions. Today we will talk about the best steps to successful end-user adoption :hugs: :innocent: :muscle:

❶ Get an executive champion :star_struck:

  • Executive support is key to getting buy-in. People look to your leadership for direction, which is why it’s
    important to not only have your executives support your new technology, but use it as well. Leverage executive support at launch by having them send rollout communications, or have them record a video to include in new-hire onboarding. An executive’s level of involvement often can be an indication of the success your new technology will have across your organization. :trophy:

❷ Form a cross-functional project team :technologist::woman_technologist:

  • The needs across different departments in an organization can vary, and each may realize different benefits from your solutions. For example, a mobile sales team will appreciate secure access to their apps and data while they’re on the road, and product developers will appreciate the ability to collaborate on files with peers working at remote locations around the world. This is why we recommend forming a cross-functional project team who can evangelize, reinforce leadership’s message, and provide team-specific training as they interface with end users daily. :smiling_face_with_three_hearts:

❸ Build an awareness campaign that gets everybody excited :innocent:

  • A good communications plan includes a variety of promotional tactics that both inform and generate
    excitement about your solutions. Also keep in mind that you have two groups to consider in your plan:
    your project team and your end users.

    1. Your plan for your project team should include the rollout progress, milestones, and actions,
      and acknowledge difficulties and setbacks.

    2. Your end user plan should include a cadence of emails from leadership that builds awareness and excitement about your upcoming rollout.

  • In addition to email, it’s important to use additional tactics to keep your users interested and eager for your launch. Some examples include:

    1. Posting signage in high-traffic locations, like by the printers or in the break room.

    2. Creating a video to loop on a screen by the elevators or in the cafe

    3. Creating a “countdown to launch” app on your intranet’s homepage

    4. Hosting a preview party to encourage excitement, showcase upcoming trainings, introduce your project team, etc.

❹ Make training role-specific—and fun :tada:

  • Center every session around how and why this new tool paves the way for users to succeed in their particular duties. During a recent research study, we learned that not only do end users want training on how to use new tools, but they want to understand the “why”—what benefit is this new tool going to provide to me? Make sure this is loud and clear in your trainings. Too often, trainings are created with a one-size-fits-all mentality and ultimately fail. :boom::x:

  • When working with your training team, encourage them to train to the roles or departments of your users. And because you have secured executive sponsorship for your launch, utilize their support when building out your training sessions. Consider asking for funding to host interactive sessions, such as lunch-and-learns, snacks and trivia, or an incentives program in which employees earn rewards for active use. Finally, make sure all training materials are easily accessible and repeated often. :smiling_face_with_three_hearts:

❺ Ask for feedback :flushed:

  • It’s important to build a feedback loop into your plan. Seeking candid feedback about their experiences
    shortly after deployment is a great way to identify people who may not have been properly trained. It also will show them that their concerns are valued and will keep them engaged in the adoption process. :fist::wave:

100daysofcode - Day 57

Hello friends, 57/100 is here, the counter is incrementing and a lot of new knowledge is coming. In yesterday's post we talked about the best steps to successful end-user adoption, while today we will dive into data anonymization and its importance when dealing with users' data. :smiling_face_with_three_hearts: :star_struck:

What is data anonymization ? :thinking:

  • Data anonymization is the process of protecting people’s private or sensitive data by eliminating that kind of information. Typically, data anonymization involves blanking, hashing, or masking personal information, often by using fixed-length codes to represent data columns, or hiding data with altered values.
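As a minimal sketch of what blanking, hashing, and masking can look like (hypothetical record and field names, Python standard library only):

```python
import hashlib

def hash_value(value: str) -> str:
    """Replace a sensitive value with a fixed-length SHA-256 digest."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:16]

def mask_email(email: str) -> str:
    """Keep the first character and the domain, blank out the rest."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

# Invented record purely for illustration.
record = {"name": "Jane Doe", "email": "jane.doe@example.com", "ssn": "123-45-6789"}
anonymized = {
    "name": hash_value(record["name"]),          # hashing
    "email": mask_email(record["email"]),        # masking
    "ssn": "***-**-" + record["ssn"][-4:],       # partial blanking
}
print(anonymized)
```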

Your role in data anonymization :man_technologist: 🫵

  • Organizations have a responsibility to protect their data and the personal information that data might contain. As a data analyst, you might be expected to understand what data needs to be anonymized, but you generally wouldn’t be responsible for the data anonymization itself. A rare exception might be if you work with a copy of the data for testing or development purposes. In this case, you could be required to anonymize the data before you work with it.

What types of data should be anonymized?

  • Healthcare :hospital: and financial :bank: data are two of the most sensitive types of data. These industries rely a lot on data anonymization techniques. After all, the stakes are very high. That’s why data in these two industries usually goes through de-identification, which is a process used to wipe data clean of all personally identifying information.

Data anonymization is used in just about every industry. That is why it is so important for data analysts to understand the basics. Here is a list of data that is often anonymized:

  • Telephone numbers :phone:

  • Names

  • License plates and license numbers 🪪

  • Social security numbers :id:

  • IP addresses :calling:

  • Medical records 🩻

  • Email addresses :incoming_envelope:

  • Photographs :sparkler:

  • Account numbers :receipt:


100daysofcode - Day 58

Hello friends, the eighth :8ball: day of our second half is already here. Our daily counter is increasing and a lot of new knowledge comes with each increment :repeat_one:. Yesterday :spiral_calendar: we discussed the data anonymization process and its importance when dealing with and managing users' data. Today: data modeling levels :rocket: and techniques. :hugs:

What is data modeling?

  • Data modeling is the process of creating diagrams :chart_with_downwards_trend: :bar_chart: that visually represent how data is organized and structured. These visual representations are called data models. You can think of data modeling as a blueprint of a house. At any point, there might be electricians, carpenters, and plumbers using that blueprint. :wave::grinning:

  • Each one of these builders has a different relationship to the blueprint, but they all need it to understand the overall structure of the house. Data models are similar; different users might have different data needs, but the data model gives them an understanding of the structure as a whole. :man_technologist::woman_technologist:

Levels of data modeling

  • Each level of data modeling has a different level of detail.

    1. Conceptual data modeling gives a high-level view of the data structure, such as how data interacts across an organization. For example, a conceptual data model may be used to define the business requirements for a new database. A conceptual data model doesn’t contain technical details. :fist:

    2. Logical data modeling focuses on the technical details of a database such as relationships, attributes, and entities. For example, a logical data model defines how individual records are uniquely identified in a database. But it doesn’t spell out actual names of database tables. That’s the job of a physical data model. :hugs:

    3. Physical data modeling depicts how a database operates. A physical data model defines all entities and attributes used; for example, it includes table names, column names, and data types for the database. :innocent:
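To make the contrast with the conceptual and logical levels concrete, here is a small sketch of the kind of detail a physical data model pins down, using Python's built-in sqlite3 and invented table and column names:

```python
import sqlite3

# A physical data model pins down actual table names, column names, and data types.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,  -- how each record is uniquely identified
        full_name   TEXT NOT NULL,
        email       TEXT UNIQUE,
        created_at  TEXT
    )
""")
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id),
        total_usd   REAL
    )
""")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['customers', 'orders']
```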

Data-modeling techniques :upside_down_face:

  • There are a lot of approaches when it comes to developing data models, but two common methods are the Entity Relationship Diagram (ERD) and the Unified Modeling Language (UML) diagram.

    1. ERDs are a visual way to understand the relationship between entities in the data model.

    2. UML diagrams are very detailed diagrams that describe the structure of a system by showing the system’s entities, attributes, operations, and their relationships.


100daysofcode - Day 59

Hello friends, day++ and a new dive into some new topics in the data world. Yesterday we talked about the data modeling levels and the different techniques used to build them. Today we will discover a new amazing topic: “Data limitations” :smiling_face_with_tear::innocent:

Data is powerful :fist:, but it has its limitations :sob:. Has someone’s personal opinion found its way into the numbers? Is your data telling the whole story? Part of being a great data analyst :bar_chart: is knowing the limits of data and planning for them. This reading explores how you can do that.

  • If you have incomplete or nonexistent data, you might realize during an analysis that you don’t have enough data to reach a conclusion. Or, you might even be solving a different problem altogether! For example, suppose you are looking for employees who earned a particular certificate but discover that certification records go back only two years at your company. You can still use the data, but you will need to make the limits of your analysis clear. You might be able to find an alternate source of the data by contacting the company that led the training. But to be safe, you should be up front about the incomplete dataset until that data becomes available.

  • If you’re collecting data from other teams and using existing spreadsheets, it is good to keep in mind that people use different business rules. So one team might define and measure things in a completely different way than another. For example, if a metric is the total number of trainees in a certificate program, you could have one team that counts every person who registered for the training, and another team that counts only the people who completed the program. In cases like these, establishing how to measure things early on standardizes the data across the board for greater reliability and accuracy. This will make sure comparisons between teams are meaningful and insightful.

  • Dirty data refers to data that contains errors. Dirty data can lead to productivity loss, unnecessary spending, and unwise decision-making. A good data cleaning effort can help you avoid this. As a quick reminder, data cleaning is the process of fixing or removing incorrect, corrupted, incorrectly formatted, duplicate, or incomplete data within a dataset. When you find and fix the errors - while tracking the changes you made - you can avoid a data disaster. You will learn how to clean data later in the training.

Beyond clean and consistent data, a few other practices help you work within the limits of your data:

  • Compare the same types of data

  • Visualize with care

  • Leave out needless graphs

  • Test for statistical significance

  • Pay attention to sample size

  • In any organization, a big part of a data analyst’s role is making sound judgments. When you know the limitations of your data, you can make judgment calls that help people make better decisions supported by the data. Data is an extremely powerful tool for decision-making, but if it is incomplete, misaligned, or hasn’t been cleaned, then it can be misleading. Take the necessary steps to make sure that your data is complete and consistent. Clean the data before you begin your analysis to save yourself and possibly others a great amount of time and effort.

100daysofcode - Day 60

Hello friends, a new day is here, and some :new: knowledge in the amazing data world is required to wrap up the day in an amazing way. Yesterday we talked about data limitations and how to overcome them. Today we will discuss another interesting topic: metadata and its importance :hugs::fist:

Metadata is as important as the data itself :thinking::hugs:

  • Take a look at any data you find. What is it? Where did it come from? Is it useful? How do you know? This is where metadata comes in to provide a deeper :diving_mask: understanding of the data. To put it simply, metadata is data about data. In database management, it provides information about other data and helps data analysts :chart_with_downwards_trend: interpret the contents of the data within a database.

  • Regardless of whether you are working with a large or small quantity of data, metadata tells the who, what, when, where, which, how, and why of data :minidisc::floppy_disk:.

Elements of metadata :child:

  • Title and description :hugs:

    • What is the name of the file or website you are examining? :file_folder::desktop_computer:
    • What type of content does it contain? :memo:
  • Tags and categories :label: :bookmark:

    • What is the general overview of the data that you have?
    • Is the data indexed or described in a specific way?
  • Who created it and when :writing_hand: :calendar:

    • Where did the data come from, and when was it created?
    • Is it recent, or has it existed for a long time?
  • Who last modified it and when :lock_with_ink_pen: :spiral_calendar:

    • Were any changes made to the data?
    • If yes, were the modifications recent?
  • Who can access or update it :old_key: :globe_with_meridians:

    • Is this dataset public? :earth_americas:
    • Are special permissions needed to customize or modify the dataset? :closed_lock_with_key:

Metadata examples :star_struck:

In today’s digital world, metadata is everywhere, and it is becoming a more common practice to provide metadata on a lot of media and information you interact with. Here are some real-world examples of where to find metadata:

  • Photos: :camera_flash: :fireworks: Whenever a photo is captured with a camera, metadata such as camera filename, date, time, and geolocation are gathered and saved with it.

  • Emails: :envelope_with_arrow: :email: When an email is sent or received, there is lots of visible metadata such as subject line, the sender, the recipient and date and time sent. There is also hidden metadata that includes server names, IP addresses, HTML format, and software details.

  • Spreadsheets and documents :page_facing_up: : Spreadsheets and documents are already filled with a considerable amount of data so it is no surprise that metadata would also accompany them. Titles, author, creation date, number of pages, user comments as well as names of tabs, tables, and columns are all metadata that one can find in spreadsheets and documents.

  • Websites :spider_web: :desktop_computer:: Every web page has a number of standard metadata fields, such as tags and categories, site creator’s name, web page title and description, time of creation and any iconography.

  • Digital files :dvd::floppy_disk:: Usually, if you right click on any computer file, you will see its metadata. This could consist of file name, file size, date of creation and modification, and type of file.
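For example, a few lines of Python (standard library only, with a hypothetical file path) can surface some of this digital-file metadata:

```python
import os
from datetime import datetime

path = "report.pdf"  # hypothetical file
info = os.stat(path)

print("File name:    ", os.path.basename(path))
print("File size:    ", info.st_size, "bytes")
print("Last modified:", datetime.fromtimestamp(info.st_mtime))
```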


100daysofcode - Day 61

Hello friends, 61/100 is already here, new information is ready and a lot of fun is on the way. After talking yesterday about metadata and its types, today we will discuss the structure of data and its different types. :star_struck::innocent:

Data is everywhere and it can be stored in lots of ways.
Two general categories of data are:

  • Structured data: Organized in a certain format, such as rows and columns.

  • Unstructured data: Not organized in any easy-to-identify way.

  • Example: when you rate your favorite restaurant :man_cook::hamburger: online, you’re creating structured data. But when you use Google Earth :earth_africa: to check out a satellite image of a restaurant location, you’re using unstructured data.

Structured vs Unstructured data

  • Structured data: As we described earlier, structured data is organized in a certain format. This makes it easier to store and query for business needs. If the data is exported, the structure goes along with the data.

  • Unstructured data: Unstructured data can’t be organized in any easily identifiable manner. And there is much more unstructured than structured data in the world. Video and audio files, text files, social media content, satellite imagery, presentations, PDF files, open-ended survey responses, and websites all qualify as types of unstructured data.
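A tiny, made-up illustration of the difference: the structured record below fits neatly into named fields, while the free-form review text does not.

```python
# Structured: organized into named fields (think rows and columns).
restaurant_rating = {"restaurant": "Beirut Bites", "stars": 5, "visited": "2022-07-14"}

# Unstructured: free-form text with no predefined format.
review_text = ("Stopped by on a Friday night, the food was amazing "
               "and the staff were super friendly. Would come back!")

print(restaurant_rating["stars"])   # a structured field is trivial to query
print(len(review_text.split()))     # unstructured text needs extra processing first
```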


100daysofcode - Day 62

Hello friends, day++, an increasing counter, and a daily dose of knowledge. Yesterday we discussed the different ways data can be structured, while today we will discuss what dirty data is all about :smiling_face_with_tear: :wastebasket:.

What is dirty data? :thinking:

  • Dirty data, also known as rogue data, is inaccurate, incomplete, or inconsistent data, especially in a computer system or database :smiling_face_with_tear:

  • Dirty data can contain such mistakes as spelling or punctuation errors, incorrect data associated with a field, incomplete or outdated data, or even data that has been duplicated in the database. :sob:

Types of dirty data :star_struck:

Duplicate data :cd: :floppy_disk:

  • Description: Any data record that shows up more than once
  • Possible causes: Manual data entry, batch data imports, or data migration
  • Potential harm to businesses: Skewed metrics or analyses, inflated or inaccurate counts or predictions, or confusion during data retrieval

Outdated data :spiral_calendar:

  • Description: Any data that is old which should be replaced with newer and more accurate information
  • Possible causes: People changing roles or companies, or software and systems becoming obsolete
  • Potential harm to businesses: Inaccurate insights, decision-making, and analytics

Incomplete data :x:

  • Description: Any data that is missing important fields
  • Possible causes: Improper data collection or incorrect data entry
  • Potential harm to businesses: Decreased productivity, inaccurate insights, or inability to complete essential services

Incorrect/inaccurate data :sob:

  • Description: Any data that is complete but inaccurate
  • Possible causes: Human error inserted during data input, fake information, or mock data
  • Potential harm to businesses: Inaccurate insights or decision-making based on bad information resulting in revenue loss

Inconsistent data :slightly_frowning_face:

  • Description: Any data that uses different formats to represent the same thing
  • Possible causes: Data stored incorrectly or errors inserted during data transfer
  • Potential harm to businesses: Contradictory data points leading to confusion or inability to classify or segment customers

100daysofcode - Day 63

Hello friends, a new day and a deep dive into the world of data. Our counter is increasing toward completing the 100daysofcode challenge. Yesterday we talked about dirty data and its different types :smiling_face_with_tear:. Today we will discuss the open-data debate :star_struck::hugs:.

What is open data ? :thinking:

In data analytics, open data :open_file_folder: :unlock: is part of data ethics, which has to do with using data ethically. Openness refers to free access, usage, and sharing of data. But for data to be considered open, it has to:

  • :white_check_mark: Be available and accessible to the public as a complete dataset ❶

  • :white_check_mark: Be provided under terms that allow it to be reused and redistributed ❷

  • :white_check_mark: Allow universal participation so that anyone can use, reuse, and redistribute the data ❸

Data can only be considered open when it meets all three of these standards.

The open data debate: What data should be publicly :railway_car: available :earth_asia: ?

  • One of the biggest benefits :smiling_face_with_three_hearts: of open data is that credible databases can be used more widely. Basically, this means that all of that good data can be leveraged, shared, and combined with other data. This could have a huge impact on scientific collaboration, research advances, analytical capacity, and decision-making. But it is important to think about the individuals being represented by the public, open data, too. :innocent::blush:

Third-party data :3rd_place_medal: :slightly_smiling_face:

  • Collected by an entity that doesn’t have a direct relationship with the user the data belongs to :minidisc:. You might remember learning about this type of data earlier. For example, third parties might collect information about visitors to a certain website :desktop_computer: . Doing this lets these third parties create audience profiles :woman_technologist: , which helps them better understand user behavior and target them with more effective advertising.

Personally identifiable information (PII) :lock_with_ink_pen:

  • Data that is reasonably likely to identify a person and make information known about them. It is important to keep this data safe. PII can include a person’s address :house: , credit card :credit_card: information, social security number 🪪 , medical records :hospital:, and more.

  • Everyone wants to keep personal information about themselves private. Because third-party data is readily available, it is important to balance the openness of data with the privacy of individuals.


100daysofcode - Day 64

Hello amazing friends, 64/100, a daily increment and of course a lot of new, interesting information. Yesterday we discussed what open data is all about. Today we will move on to discuss the six problem types.

Six problem types ?? :thinking:

  • Data analytics is so much more than just plugging information into a platform to find insights 🫣. It is about solving :white_check_mark: problems. To get to the root of these problems and find practical solutions, there are lots of opportunities for creative thinking. No matter the problem, the first and most important step is understanding it. From there, it is good to take a problem-solver approach to your analysis to help you decide what information needs to be included, how you can transform the data, and how the data will be used. :hugs:

Data analysts typically work with :six: problem types :astonished:

  1. Making predictions: A company that wants to know the best advertising method to bring in new customers is an example of a problem requiring analysts to make predictions. Analysts with data on location :world_map: , type of media :cd: , and number of new customers :sunglasses: acquired as a result of past ads can’t guarantee future results, but they can help predict the best placement of advertising to reach the target audience.

  2. Categorizing things: An example of a problem requiring analysts to categorize things is a company’s goal to improve customer satisfaction. Analysts might classify customer service calls based on certain keywords or scores. This could help identify top-performing customer service representatives or help correlate certain actions taken with higher customer satisfaction scores.

  3. Spotting something unusual: A company that sells smart watches that help people monitor their health would be interested in designing their software to spot something unusual. Analysts who have analyzed aggregated health data can help product developers determine the right algorithms to spot and set off alarms when certain data doesn’t trend normally.

  4. Identifying themes: User experience (UX) designers might rely on analysts to analyze user interaction data. Similar to problems that require analysts to categorize things, usability improvement projects might require analysts to identify themes to help prioritize the right product features for improvement. Themes are most often used to help researchers explore certain aspects of data. In a user study, user beliefs, practices, and needs are examples of themes.

  5. Discovering connections: A third-party logistics company working with another company to get shipments delivered to customers on time is a problem requiring analysts to discover connections. By analyzing the wait times at shipping hubs, analysts can determine the appropriate schedule changes to increase the number of on-time deliveries.

  6. Finding patterns: :mag: Minimizing downtime caused by machine failure is an example of a problem requiring analysts to find patterns in data. For example, by analyzing maintenance data, they might discover that most failures happen if regular maintenance is delayed by more than a 15-day window.


100daysofcode - Day 65

Hello friends, 65/100, a lot of knowledge and fun :smiling_face_with_three_hearts::star_struck: . Every day brings a :new: increment and a lot of new topics to discover in this amazing e-world :earth_africa:. Today we will move away from the data world for a quick and gentle tour of the world of Web3 and how it differs from Web 1.0 and 2.0 :thinking:.

Web 1.0: Read only :no_mouth::smiling_face_with_tear:

  • Referred to as the Syntactic Web or the read-only web, this is the era (1990-2004) where the role of the user was limited to reading information provided by the content producers.

  • There was no option for the user or consumer to communicate information back to the content producers.

Web 2.0: Read and write :hugs:

  • Began in 2004 with the emergence of social media platforms, allowing user-generated content and user-to-user interactions instead of providing read-only information.

  • Web 2.0 also birthed the advertising-driven revenue model. $💰

Web 3.0: Read-Write-Own :tada::star_struck:

  • The internet required too much trust. That is, most of the internet that people know and use today relies on trusting a handful of private companies to act in the public’s best interests.

  • Web 3.0 is built upon the core concepts of decentralization, openness, and greater user utility.

What is Blockchain? :astonished:

  • Blockchain is a peer-to-peer network that does not require a centralized entity to work. The peers within the network can transact. To validate the transactions, each blockchain uses consensus algorithms. :grinning:

  • Blockchain’s other feature is immutability. It ensures that no data is changeable once it is written. Other key features of blockchain technology include transparency and trust. :innocent:

Stay tuned: in the coming posts, we will dive into the blockchain world in a deeper way.


100daysofcode - Day 66

Day++, the 66th is here, and our journey with blockchain and WEB3 is running with a lot of fun and excitement :smiling_face_with_three_hearts::star_struck:. Today we will talk about the blockchain architecture and how it works in detail. 🫣

Blockchain Architecture :earth_africa:

  • A blockchain is divided into :three: primary parts:
    1. Blocks

    2. Transactions

    3. Consensus

01 - Blocks

  • A blockchain is composed of blocks. The blocks are stored in a linear fashion where the latest block is attached to the previous block. Each block contains data — the structure of the data stored within the block is determined by the blockchain type and how it manages the data :thinking:

  • Also, the first block of any blockchain is known as the Genesis block. It is the only block that doesn’t have a preceding block.

  • Each block’s structure can be divided into three parts, including the data, hash, and the previous block hash.
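A minimal, illustrative Python sketch of those three parts (this is a toy, not a real blockchain implementation):

```python
import hashlib
import json

def make_block(data, previous_hash):
    """Build a block from its data and the previous block's hash."""
    payload = json.dumps({"data": data, "previous_hash": previous_hash}, sort_keys=True)
    block_hash = hashlib.sha256(payload.encode()).hexdigest()
    return {"data": data, "previous_hash": previous_hash, "hash": block_hash}

# The genesis block is the only block with no preceding block.
genesis = make_block("genesis", previous_hash="0" * 64)
block_1 = make_block({"from": "Alice", "to": "Bob", "value": 10}, genesis["hash"])
block_2 = make_block({"from": "Bob", "to": "Carol", "value": 4}, block_1["hash"])

chain = [genesis, block_1, block_2]
print(block_2["previous_hash"] == block_1["hash"])  # True: blocks are linked by hashes
```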

02 - Transactions :star_struck: :star_struck:

  • A transaction takes place within the network when one peer sends information to another peer. It is a key element of any blockchain, and without it, there would be no purpose in using a blockchain.

  • A transaction consists of information, including the sender, receiver, and value. It is similar to a transaction done on modern credit card platforms. The only difference is that the transaction here is done without a centralized authority.

03 - Consensus :face_with_hand_over_mouth:

  • The last important part of blockchain architecture is consensus. It is the method through which a transaction is validated. Each blockchain can have a different consensus method attached to it.

  • These algorithms offer a set of rules that must be followed by everyone in the network. Also, to enforce a consensus method, nodes have to participate. Without any node participation, the consensus method cannot be implemented.


100daysofcode - Day 67

Hello friends, the 67th day out of 100 is here and our gentle dive into the world of Web3 and blockchain is still running :star_struck::smiling_face_with_three_hearts:. Today we will discuss how a blockchain works, in addition to its different types. :face_with_hand_over_mouth::innocent:

How Does Blockchain Work :thinking:

  1. In the first step, a transaction is requested :crossed_fingers:. The transaction can transfer either information or some asset of monetary value.

  2. A block is created to represent the transaction. However, the transaction is not :x: validated yet.

  3. The block with the transaction is now sent to the network :globe_with_meridians: nodes. If it is a public blockchain, it is sent to each node.

  4. The nodes now start validating according to the consensus method used. In the case of bitcoin, Proof-of-Work (PoW) is used (a toy sketch follows this list). 🫣

  5. After successful validation, the node now receives a reward :moneybag: based on its effort.

  6. The transaction is now complete. :white_check_mark:
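And here is the toy Proof-of-Work sketch referenced in step 4: nodes search for a nonce that makes the block hash start with a given number of leading zeros (the difficulty and transaction string are invented for illustration):

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4):
    """Search for a nonce so the hash of block_data + nonce starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("alice->bob:10")   # invented transaction payload
print(f"Valid nonce: {nonce}")
print(f"Block hash:  {digest}")
```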

Types of Blockchain Architecture :partying_face:

  • Public Blockchain :earth_africa:

  • Private Blockchain :closed_lock_with_key:

  • Federated Blockchain :key: :globe_with_meridians: (mix between public and private)
