MTRN4010 –T1/2024 PROJECT 1

MTRN4010/2024 - PROJECT #1
Part A - Dead reckoning localization.
Part B - Basic processing of LiDAR scans / mapping LiDAR scans in the global coordinate frame. OOI detection.
Part C - Data association.
Part D - Applying sporadic deterministic localization (map-based localization).
Part E - Applying deterministic permanent localization: sensor data fusion via state observer.
Introduction:
This project is composed of five parts (modules). Those modules are intended to jointly
operate for allowing the platform (vehicle) to infer its surrounding context and to
continuously estimate its heading and its 2D position (i.e., to perform localization). The
sensors used to feed the processing modules are a speed sensor, a gyroscope, and LiDAR.
We solve the project gradually, by designing, implementing, and testing the individual
modules. Those are then integrated for implementing the full system.
In this project our solution is based on deterministic approaches, in which we apply
concepts we know from previous courses (e.g., from Math and MMAN3200) and some
other approaches we learn in MTRN4010 (e.g., optimization). Later, in Project 2 we will
extend these concepts by applying Stochastic Sensor Data Fusion.
The following section is a brief description of the project parts. In subsequent sections a detailed description is available, in which the requirements and the marking criteria of those project parts are given in detail.
Part A requires implementing a dead-reckoning process. This process generates
predictions of the platform’s pose, based on a kinematic model whose inputs are fed by
measurements from a speed sensor and from an IMU gyroscope. Your program will
keep performing predictions of the platform’s poses during the full duration of the test by
properly applying the kinematic model.
Your module will operate in a “real-time” fashion, being called from the “Events loop” at
each event, always considering the elapsed time since the last previous event (“dt”).
You may adapt one of the provided example programs that we have released for
solving problem 4 in the tutorial work of week 2.
In addition, it is essential to periodically record the estimates of the platform's pose at a
specified sampling rate. These recordings should be stored in a buffer and plotted at the
end of the playback session. This enables you to evaluate the accuracy of your results.
In addition to the previously mentioned capabilities, your module must allow the user to
define certain parameters: an offset for manually compensating gyroscope bias and a
gain to allow calibrating the speed sensor. A detailed description of Part A is in the
section about detailed descriptions.
Part B
Feature extraction. In this section you are required to process LiDAR scans for inferring
the presence of “Objects of Interest” (OOIs), which for us are those objects that seem to
be navigation landmarks.
Each time a new LiDAR scan is available (at a “LiDAR event”) your module will perform
the following processing:
1) “Clustering” (a.k.a. “segmentation”). You will process the raw scan to obtain a list
of clusters and select those that seem to be OOIs.
2) Visualization in LiDAR1’s CF (in Cartesian representation), of all the scan points
and the centers of the detected OOIs.
(note: you are not required to do it for scans from LiDAR2)
3) Visualization in GCF of all the scan points and the OOIs centers.
(you will use the same figure already being used for part A)
In this case, you are required to perform this processing and visualization for
scans from both LiDAR sensors.
For the transformation to GCF you will use the estimates of the platform’s pose
that you obtain from the module implemented for part A.
If that module seems to be operating unsatisfactorily, you will use Ground Truth
poses, to verify the consistency of your Module B.
This means that at each LiDAR event your module will process the arrived
measurement (i.e., a LiDAR scan) for performing the processing required in subitems 1-
3. In some of them you may also need to use the current pose estimates which are
maintained and provided by module A (which means that Module-A must be enabled in
the events loop.)
Keep in mind that we specify upper limits for the processing time required by your
module, so that inefficient implementations may lose marks. We give details about how
to measure processing times (see appendix section “Measuring processing time”).
Part C Data Association
This module focuses on performing "Data Association" ("DA"). For this to be feasible,
part A and part B need to be successfully working before attempting part C.
DA is necessary for other modules of the project (D and E). Performance of your DA
processing is demonstrated visually, for which you will implement some dynamic plots.
Your module must be able to deal with missing detected poles (“false negatives”) and
with unexpected poles ("false positives"); those may occur due to sensor limitations,
occlusions, presence of intruders in the area of operation, incompletely surveyed maps,
etc. Refer to section “Data Association” in the appendix, for more details.
Part D requires implementing a classic approach for localization, based on processing
measurements from sensors such as the LiDAR, and on exploiting a navigation map
(map of known/surveyed landmarks). The concept about localization based on a map of
landmarks is similar to that of the GPS example, discussed in Lecture 3. However, in
our case, we use a LiDAR to detect navigation landmarks and to measure their
positions in the local CF of the sensors (in place of measuring distances to visible satellites). Your work in tutorial 3 may help in this matter.
In this part of the project, you will be free to decide how to solve the task, which may
require solving a set of equations, for which your knowledge of Mathematics may be
sufficient. Alternatively, you may apply optimization (discussed in lecture 3), or other
approaches you may know from other courses. This module must be able to generate
pose estimates at LiDAR events in which enough useful OOIs are available. Part D is
based on the outputs of modules B and C (but not of A directly). If in certain areas of
operation, the information provided by B and C is not sufficient, your module will report
“no solution”, via a flag which is intended to indicate existence (or not) of solution at that
time. (For that reason, we used the term “sporadic localization”, as we cannot assume it
would be permanently available.)
Part E requires the implementation of a deterministic observer, for fusing, in real-time,
all the sources of information that we have (i.e., sensors’ measurements and models
and/or outputs of any of the other modules) for achieving permanent vehicle
localization. You decide which sources of information and how to exploit those. The
state observer will be like those you used in MMAN3200; however, our problem here
cannot be solved directly using that LTI-oriented approach, although a similar one applies. Module E
should make good use of the outputs generated by Module D. Ideas about how to
propose this state observer will be briefly discussed in lecture time. We say that it is
“permanently available” because this observer would be able to provide, nominally,
estimates at any required time.
Details about the required solutions for A, B, C, D and E can be found in the extended
description, included in this document.
Deadline for submission of the full project is Friday Week 5, 23:55 + 2 days.
Submission will be via Moodle. Details about how your program files must be organized
(names, author details) are specified in an extra document, to be released in week 3.
Marking criteria
Project 1, if 100% successfully completed, provides 23 points of the course final mark.
In addition to the submission of your implementation, you need to explain and show, to
a member of the teaching staff, how your submitted program performs. Both
submission and demonstration are necessary conditions. A project which is not
submitted, or that is not demonstrated, will get no marks.
The demonstration will take place during week 7, in your weekly lab_tut session. It will
be based on the submitted material (which is to be kept, securely, in the Moodle
submission repository). Other versions of your solution that you may produce cannot be
used in the demonstration.
Note: students who are enrolled in a session that runs on Friday will be offered an
alternative day (as the Friday of Week 7 is a holiday).
Your final project mark depends on the performance of your implementation, on the demonstration of the project, and on a "knowledge factor" about the project (variable Q in the marking equation, below).
The relevance of the implementation and demonstration of the project parts is as
follows:
Part A: 18% (of the project mark, i.e., 0.18*23 marks)
Part B: 21%; Part C: 19%
Part D: 21%; Part E: 21%
The sum of the values obtained in each part is the "Submitted and Demonstrated Project Mark". The factor Q, which is used for calculating the final mark of Project 1, is obtained based on your performance answering questions during the demonstration, and/or via a quiz if needed. Factor Q is represented on a scale of [0:100].
The influence of Q on the overall project mark can be seen in the following formula.
Overall Project Mark = [Submitted and Demonstrated Project Mark] *
(0.6+0.4*Q/100)
For instance, if you fail in answering all the questions, your Q factor will be 0, which
means you would get 60% of the achieved marks of your submitted/demonstrated
programs.
Questions/doubts: Ask us via Teams Forum, or by emailing the lecturer
(j.guivant@unsw.edu.au)
Detailed Specifications and detailed marking criteria.
The modules that you will implement for solving the five parts of this project are expected to work in simulated real time (from a data playback session). The modules are called from the individual iterations of the "events loop", during playback sessions.
The “event loop” was introduced during week 2, for solving one problem of the tutorial
work (Problem 4).
You were required to prepare for this project by solving those tutorial problems. We
assume that at the time you are reading this document, you have already been working
on those tutorial problems. In addition, the example programs that have been provided
for week 2 tutorial work can be modified by you, for implementing your project 1 without
needing to start from scratch.
Part A – Localization based on Dead reckoning
estimation of the vehicle position and heading.
Implement a module for estimating the platform pose, based on a kinematic model, and
on the measurements of the speed (longitudinal velocity) and the heading rate (from
gyroscope).
This process will be called from the events loop, at each sensor event, always being
fed with the last updated values of speed and gyro measurements, and by using a “dt”
equal to the elapsed time since the last processed event. You will always generate
estimates of the platform’s pose, at each event.
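For illustration only, a minimal sketch of one possible dead-reckoning update is shown below. The function and variable names (DeadReckoningStep, speedGain, gyroBias) are placeholders (not part of any required API), and the simple kinematic model used here should be replaced by the model discussed in the lectures if it differs.

% One possible dead-reckoning step. pose = [x; y; heading] in the GCF,
% v = measured speed, w = measured gyro rate, dt = time since the last event.
function pose = DeadReckoningStep(pose, v, w, dt, speedGain, gyroBias)
    v = v * speedGain;        % user-defined speed-sensor gain (see Item 3)
    w = w - gyroBias;         % user-defined gyroscope bias compensation
    pose(1) = pose(1) + dt * v * cos(pose(3));   % x
    pose(2) = pose(2) + dt * v * sin(pose(3));   % y
    pose(3) = pose(3) + dt * w;                  % heading
end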
In addition to maintaining the estimates of the platform’s pose, you will record those
estimates in a buffer defined by you for that purpose (this is like what we have done for
solving some tutorial problems during weeks 1 and 2). However, you will record the
values of those estimates only at the times of the LiDAR events.
Based on those recorded samples, you will be able to validate your results (accuracy of
your estimates/predictions) by comparing your estimated trajectory with that of the
provided ground truth (GT).
At the end of the playback session, you will calculate the errors between your estimated
trajectory and that of the GT, and then you will plot those discrepancies in a separate
figure.
You will also plot the recorded positions in a figure dedicated to showing information in the Global Coordinate Frame (GCF). That figure may be the one produced in one of the
example programs. You may reuse that capability of the example program.
Marking criteria for part A.
We will verify the accuracy of your solution for part A, when used with a “noise-free”
dataset (such as “aDataUsr_007b.mat”). We will consider how your program performs
during the full trip.
The contributions to the marks of Part A (expressed as % of Part A), are the following:
Item 1 (35%). Accuracy, in terms of position of your estimates, when compared to the
GT. The discrepancy must be lower than 2 cm, always (full trip).
Item 2 (30%). Accuracy, in terms of heading, of your estimates, when compared to
those of the ground truth. The absolute value of the discrepancy must be lower than 1
degree, always.
For the purpose of inspecting the performance of the estimates for evaluating items A1
and A2, you will plot the discrepancy in position and in heading, between ground truth
and your pose estimates. You will plot those discrepancies in a figure. One subfigure
(using subplot) for showing discrepancy in position
position_estimated(i) - position_GT(i)
against index i. A second subfigure will be used for showing the absolute value of the
discrepancy in heading estimates, plotting
| heading_estimated(i) - heading_GT(i) |, in degrees.
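As an optional illustration, a sketch of such a discrepancy plot is given below; it assumes you have recorded the estimates and the GT as N x 3 arrays [x, y, heading] (hypothetical names estPose and gtPose), with headings in radians.

% Plot position and heading discrepancies (estimates vs GT), one subplot each.
function PlotDiscrepancies(estPose, gtPose)
    dPos  = sqrt(sum((estPose(:,1:2) - gtPose(:,1:2)).^2, 2)) * 100;         % in cm
    dHead = abs(rad2deg(mod(estPose(:,3) - gtPose(:,3) + pi, 2*pi) - pi));   % in degrees
    figure('Name', 'Part A: discrepancies with respect to GT');
    subplot(2,1,1); plot(dPos);  xlabel('index i'); ylabel('position error (cm)');
    subplot(2,1,2); plot(dHead); xlabel('index i'); ylabel('|heading error| (deg)');
end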
Item 3 (15%): Functioning capability for allowing users to manually set the speed sensor
gain and the gyro offset.
Item 4 (20 %). Visualization in Global CF.
Required features (for item 4):
F1) Before the events loop starts: Static features (Map of walls and poles),
F2) After the events loop is completed, you will plot the recorded estimated path.
F3) Dynamic plot: Estimated current pose of the platform (position and
heading), animated dynamically, at the rate of the LiDAR events (you may use the API
function for that purpose, if you want).
F4) Dynamic plot: Actual current pose of the platform (GT position and
heading), animated dynamically, at the refresh rate used in F3.
These features (F1 to F4) are relatively simple; for that reason, each of those items will be considered accepted ONLY if it is "free of glitches". Each rejected feature will result in a mark reduction of 10%.
This means that if all features are accepted, the mark for this item will be markA4 = 20%. If one feature is not accepted (but the rest are OK), then the item would have a mark markA4 = 20 - 10 = 10%. In the worst case, markA4 is limited to 0 (i.e., it cannot become negative).
Inefficient (e.g. too slow brute force) visualizations will not be accepted as “free of
glitches”. Consult the leading demonstrator in your session, to have a brief look at your
results (e.g. during week 4).
Visualization updates must be performed ONLY at the LiDAR events, even if you are
not processing LiDAR data.
Part B –Processing LIDAR data. Mapping from local
to global CF. Detection of OOIs (Objects of Interest).
Implement a module for processing individual scans, for detecting certain types of
objects (from the raw measurements provided by the sensors).
This processing will be performed at each LiDAR event.
Part B is divided into subsections (B1, B2, B3), which are described here.
B1) For each LiDAR scan, parse (decode) the data to extract ranges and intensities.
In addition, express the scan points in Cartesian coordinates.
For scans from LiDAR1, in a figure representing LiDAR1's CF, dynamically show all the scan points in Cartesian coordinates. In the same figure, show, using a different color, the points that have high reflectivity.
You are NOT required to implement similar visualization for scans generated by
LiDAR2.
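As a hint (not a required implementation), the following sketch expresses a decoded scan in Cartesian coordinates and highlights the highly reflective points; it assumes ranges (in metres) and intensities are 301x1 arrays and uses the -75 to +75 degree field of view described in the appendix.

% Show one scan in the LiDAR CF, with reflective points in a different colour.
function ShowScanCartesian(ranges, intensities)
    angles = deg2rad(-75:0.5:75)';       % 301 azimuth angles (see appendix)
    x = ranges .* cos(angles);
    y = ranges .* sin(angles);
    bright = intensities > 0;            % "not opaque" returns
    plot(x, y, '.b'); hold on;
    plot(x(bright), y(bright), '+r');    % highly reflective points
    hold off; axis equal; title('LiDAR1 scan (LiDAR CF)');
end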
B2) At each LiDAR event, detect objects of interest. We consider as an OOI any object that seems to be a reflective pole, i.e., that has a defined size (apparent diameter smaller than 20cm) and that appears not to be opaque (i.e., at least one of the pixels that constitute that cluster is highly reflective).
Refer to appendix sections “LiDAR scans” and “OOIs / Landmarks” for more details.
A LiDAR scan usually captures multiple clusters of points, some of them associated with our OOIs/landmarks (which are small, i.e., poles having diameters lower than 20cm). In addition to considering the apparent size of a cluster of points, for the sake of increasing certainty, you will need to verify that the cluster is "sufficiently brilliant". If that condition is satisfied, you will consider that cluster to be an OOI (i.e., a
potential map landmark). For each segment (cluster of points) that seems to compose
an OOI, you need to estimate its center of geometry (CoG) and three other properties.
The output of your function will include a list with the positions of CoGs of the detected
OOIs, in that scan. Those CoGs are expressed in Cartesian representation, in the
LiDAR’s coordinate frame. The additional features of the cluster are its approximate
diameter, its brilliance (% of that cluster’s pixels that are highly reflective) and the
number of points that constitute that cluster.
You can solve this part of the project by implementing your approach from scratch, by exploiting the provided API, or by using a third-party implementation. If you use a third-party solution, you will need to clearly indicate the source of that tool, and that source
must be unrelated to UNSW. This item is required to process a LiDAR scan in less than 10 milliseconds, on a PC such as those in Lab 212/213. That processing time
excludes any extra time for refreshing plots when you test your solution. Read the
appendix section “Measuring processing time” for details about the way to measure
elapsed times.
Solutions that frequently take more than 10ms to process this task will be considered
inadequate, consequently losing 50% of the marks of this part due to that lack of
compliance. This processing time specification is for the lab computers (L212/213).
This processing time considers the time consumed for the inference of all the OOIs in a
LiDAR scan. It does not include the processing time spent for plotting or any other
visualization of the results.
B3) Extra visualization capabilities.
B3.1) Information to be shown in LiDAR1's CF (in Cartesian): centers of detected OOIs inferred from the LiDAR1 scan. This item must share the figure defined and used in B1.
B3.2) Information to be shown in Global CF
1) All OOIs (from LiDAR1 and LiDAR2) detected in item B2, expressed in GCF (you choose the style).
2) LiDAR1 scan points, expressed in GCF (use blue dots).
3) LiDAR2 scan points, expressed in GCF (use green dots).
In addition to those requirements (in B1, B2 and B3), the total processing time for treating a LiDAR event, including all required visualization, must be less than 40ms.
Marking criteria for part B.
Contributions to the marks of Part B (expressed as % of Part B)
Item B1 (05%). This item must be free of glitches. It can be mostly based on the
provided examples.
Item B2 (80%). OOI detection efficacy. By visual inspection, at certain times during the
playback session, the demonstrator will inspect the results of your OOI detector, to
verify if those OOIs you obtain are mostly consistent with the raw scans. The
demonstrator will consider two rules. Rule 1: The positions of the generated OOIs
centers must be close enough to those clusters of raw points that common-sense
dictates. Rule 2: Evident segments/clusters of points, in the raw scans, that seem to be
OOI (from common sense), must be detected to be OOIs. Few and sporadic misses will
be ignored by the demonstrator. The demonstrator will pause your program at certain
times to inspect the information you show in the local CF (from item B3). This
procedure will be repeated at least four times; based on that, the demonstrator and you
will get a ratio of success ("k"). The mark for your part B2 will be markB2 = 80*k/100.
Item B3 (15%). Visualization. This capability is necessary for showing results. It is
composed of two subitems (B3.1 and B3.2), each having a relevance of 7.5% (of B).
These items must be free of glitches to be accepted. If item B3.1 is not working, your
part B2 will not be marked.
We specify an upper limit on the computation time of the LiDAR processing. The time spent on the data processing must be less than 15ms. The limit on the total processing time (which includes visualization operations as well) is defined to be 50ms.
If any of those limits is frequently exceeded, a penalty will be applied: 50%-mark
reduction.
Part C – Data Association
Implement the capability to perform Data Association (DA) between OOIs (detected by
module B) and Landmarks from the navigation map.
When we detect a set of OOIs, we need to infer which ones of those seem to
correspond to landmarks of the navigation map. For obtaining that correspondence we
perform a DA process. You are required to implement a basic DA module and to apply
it to the OOIs detected by module B. Read appendix section “Data Association” for
details.
In addition to the DA implementation, certain visualization resources will be
implemented, as part of the requirement in this section. In the figure associated to the
global CF (the figure also used in parts A and B), you will include some graphic
capabilities for showing the set of matched pairs, {OOI / Landmark}, in GCF. You may
use (free of penalties) some helper function, from the provided API, for that purpose, or
implement your own visualization of associated pairs.
The lecturer will show his DA approach working in videos and will also show details
about how to use the helper function. You may like to use that way, or you may try other
ways to clearly show, dynamically, the DA results. In any case, DA visualization must be
free-of-glitches.
The performance of your DA implementation will be evaluated in terms of the apparent % of success. For that purpose, you (and we) will apply a visual verification, based on your visualization module.
The test of your DA solution will be performed under adverse conditions, in which OOIs
(in GCF) and landmarks will not match well. For that purpose, we will exploit the
capability of your Module A to add an offset to the gyroscope measurements, resulting
in a gradually increasing drift in the pose estimates. Your DA process must work until the discrepancies between OOIs and Landmarks exceed a certain tolerance (1 m).
If your DA visualization module were not properly working, you would not be able to
show your DA results and, consequently, no marks would be awarded to it.
Marking criteria for this part.
Item C.1 (20%). Visualization of results from DA. In the figure related to the GCF, in
which you dynamically show all OOIs and statically show Landmarks, you will also show
the set of associated couples {OOI/Landmark} (that resulted from your DA process),
indicating each associated couple, so that users can appreciate those associations.
This component must be free-of-glitches, and it will be accepted only if you have also
implemented the actual DA process (C.2).
If item C.1 does not work, you will not be able to show item C.2.
Item C.2 (80%). Efficacy of the DA process. The demonstrator can stop your program at
several times during the demo to inspect the DA performance.
The demonstrator will pause your program at least three (3) times throughout the whole
simulation. We will count the number of successful and the number of missing DA pairs.
A factor k = [ number of successful DA matches / total number of evident matches] will
be evaluated. The mark assigned to this item will be 80*k.
The evaluation will be performed in realistic conditions, in which the estimates of Part A
would not be so good (We will achieve that by simulating a small bias in the gyroscope
measurements).
We specify an upper limit on the computation time of the DA. The spent time for the DA
processing must be less than 5ms. The limit to the total processing time (which includes
visualization operations as well) is defined to be 20ms.
If any of those limits is frequently exceeded, a penalty will be applied: 50%-mark
reduction.
Part D – Sporadic Deterministic localization
Based on the detected OOIs that have been associated to Landmarks (we refer to those
OOIs as useful OOIs), Module D will estimate the platform pose just based on the local
positions of the useful OOIs, and on the positions of their associated landmarks.
You decide the approach for solving this problem.
You must provide estimates only when you have 3 or more useful OOIs.
Record (in a buffer) the estimated pose. If you have fewer than 3 useful OOIs, you will
record the flag value [0;0;0] (no solution). You may try accepting just 2 useful OOIs, but
we do not require it.
The relevant part of the processing for implementing this module must be encapsulated in a function named EstimatePoseD, whose syntax must be:
[epose, valid] = EstimatePoseD(Useful_OOIs, AssociatedLandmarks, extraParameters);
in which epose is the result of your calculation, and valid is a flag. If a valid solution is generated, then epose will contain that solution, and the flag valid will be = 1. If no solution is generated, then you will return epose = [0;0;0] and valid = 0.
The input arguments are:
Useful_OOIs: list of useful OOIs.
AssociatedLandmarks: list of landmarks matched to the list of useful OOIs.
extraParameters: any parameters you may need in your calculations.
You may animate this estimated pose in the same way you did for showing GT and
Kinematic based estimates.
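For illustration, one possible (not mandatory) way to implement EstimatePoseD is via optimization (one of the approaches mentioned for lecture 3). The sketch below assumes Useful_OOIs and AssociatedLandmarks are 2xN matrices of [x;y] positions, with the OOIs already expressed in the platform's CF, and that extraParameters carries an initial guess of the pose; those conventions are assumptions, not requirements.

% Sketch: estimate the pose by minimizing the mismatch between the useful OOIs
% (transformed to GCF with a candidate pose) and their associated landmarks.
function [epose, valid] = EstimatePoseD(Useful_OOIs, AssociatedLandmarks, extraParameters)
    if size(Useful_OOIs, 2) < 3                  % require 3 or more useful OOIs
        epose = [0;0;0]; valid = 0; return;
    end
    cost  = @(p) sum(sum((ToGCF(p, Useful_OOIs) - AssociatedLandmarks).^2));
    epose = fminsearch(cost, extraParameters.initialGuess(:));   % [x; y; heading]
    epose = epose(:); valid = 1;
end
function q = ToGCF(p, pts)
    % express 2xN points (platform CF) in GCF, for a candidate pose p = [x;y;heading]
    c = cos(p(3)); s = sin(p(3));
    q = [c -s; s c] * pts + [p(1); p(2)];
end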
Marking criteria for this part.
Item 1 (05%). After the end of the events loop, plot, in a separate figure, the discrepancies between estimates and ground truth, for all LiDAR events, in terms of position and in terms of heading.
Item 2 (25%). Based on the plots presented in item 1, discrepancies in position must be
lower than 20cm.
Item 3 (25%). Based on the plots presented in item 1, discrepancies in heading must be
lower than 3 degrees.
Item 4 (20%). The processing time of your pose estimator must be lower than 8ms.
Item 5 (20%). Brief report (up to 3 pages), describing your approach.
We specify an upper limit on the computation time of Module D. The computation time
must be less than 60ms. If this limit is frequently exceeded, a penalty will be applied:
50%-mark reduction.
Part E – Permanent Deterministic localization
Based on the available results from the rest of the modules, implement a state observer,
for estimating the platform pose.
This estimator must operate adequately, even under the presence of noise in
measurements, including a small bias in the gyro measurements.
Your observer must have a structure like those classic observers seen in MMAN3200, in
the sense that the observer will exploit the state equation (our kinematic model), and
also a correction term. You will propose and implement this state observer.
Estimates of this observer are required to have discrepancies (with respect to the GT pose) lower than 3 cm in position, and 3 degrees in heading, always, even under the presence of disturbances such as a small gyroscope bias (e.g. conditions under which an open-loop estimator would fail).
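As a purely illustrative sketch (the structure and gains are yours to propose), the observer could predict with the kinematic model and apply a correction toward the Part D pose whenever a valid sporadic fix exists; the names ObserverStep, Kp, Kh, Kb and the bias estimate below are placeholders, not a required design.

% One possible observer step: prediction (kinematic model) + correction term.
function [pose, biasEst] = ObserverStep(pose, biasEst, v, w, dt, poseD, validD, Kp, Kh, Kb)
    pose = pose(:);
    pose(1) = pose(1) + dt * v * cos(pose(3));       % prediction, x
    pose(2) = pose(2) + dt * v * sin(pose(3));       % prediction, y
    pose(3) = pose(3) + dt * (w - biasEst);          % prediction, heading
    if validD                                        % correction, only when Part D has a solution
        e    = poseD(:) - pose;
        e(3) = mod(e(3) + pi, 2*pi) - pi;            % wrap heading error to [-pi, pi]
        pose(1:2) = pose(1:2) + Kp * e(1:2);
        pose(3)   = pose(3)   + Kh * e(3);
        biasEst   = biasEst   - Kb * e(3);           % slowly track the gyro bias
    end
end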
The marking criteria of this part will be mostly focused on the performance of your
solution and on a brief report (page limit = 2) in which you will explain your approach.
Details about the conditions under which your solution must achieve those accuracy
specifications will be released on Monday week 4, jointly with details about the required
report format.
APPENDIX
Using datasets via API
We test our implementations via playback sessions, in which we read sensors'
measurements following the same timing in which those had occurred. In a playback
session we are reading the data (measurements) which had been previously recorded
in experiments or in simulations.
As we may have multiple experiments, we encapsulate them in individual datasets.
Thus, we can test our solution using any dataset we want, to recreate what would have
happened if our solution had been used during that experiment, in real-time.
The API offers functions to access datasets, such as a function for selecting/opening a dataset, a function for chronologically reading measurements of the currently selected dataset, and a few more useful functions, called helper functions, to simplify our implementations for solving the projects. You are expected to read the example programs offered with the release of this project, and also the example program you used for solving problem 4 in tutorial 2.
Definition: “Free of glitches”
In our project, we require certain items to be implemented “free of glitches”. That is
required when the complexity of the item is sufficiently low, and for which the student is
assumed to have good skills and knowledge for solving it. In addition, these items are
usually critical and strictly necessary for solving other project parts; and these
items must work well for those purposes. These items do provide marks to the overall
project mark, and they are usually marked as OK (accepted) or not OK (not accepted),
not allowing partial marking. Failure to solve these items may result in not being able to
solve or to show the operation of other project items.
Typical examples of this type of item are those related to visualization of results and
which are usually based on provided example code.
Serious sources
For certain parts of the project, we explicitly mention that you are allowed to use resources which are offered by third parties, publicly, e.g., from MATLAB or other repositories (e.g. GitHub) or papers. If you do that, you always need to mention, in your code, the source of the tool/resource that you are using. In addition, you should mention the purpose of using it. You are not allowed to use pieces of code from other students in
the course this year or from previous years.
LiDAR scan
A LiDAR sensor provides measurements in the form of scans. A scan is composed of a set of individual range/intensity measurements, each of which is associated with an azimuth angle. In our case, the LiDARs scan from -75 to +75 degrees, azimuthally, in steps of 0.5 degrees. Due to that, the image taken by the LiDAR is composed of 301 ranges that are associated with 301 consecutive angles. In addition to the distance (range), each individual measurement provides information about the intensity of the
reflection, which can be associated with the reflectivity of the reflecting surface. In our
programs, the measurements are provided as arrays of 301 uint16 integers. Of the 16
bits, the range is given by the lowest 14 bits, and the intensity (of the reflection) is given
by the highest 2 bits. Range is expressed in cm. Intensity is used in relative terms. We
consider that a surface, at the measured spot, is “not opaque” if its measured intensity
is higher than zero. Usual surfaces are opaque; poles used as landmarks are covered
with a reflective layer, so that readings from those surfaces usually appear with high
reflection intensity. In some of the provided examples we show that property in our
dynamic plots.
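Based on that bit layout, a minimal decoding sketch is shown below (the function name DecodeScan is a placeholder); it masks the lowest 14 bits for the range (given in cm) and shifts out the highest 2 bits for the intensity.

% Decode a raw scan (301x1 uint16) into ranges (metres) and intensities.
function [ranges, intensities] = DecodeScan(rawScan)
    ranges      = double(bitand(rawScan, uint16(16383))) * 0.01;   % lowest 14 bits, cm -> m
    intensities = double(bitshift(rawScan, -14));                  % highest 2 bits
end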
OOIs / Landmarks
We use certain infrastructure for helping to infer the platform pose (position and
orientation). It is composed of easy-to-detect objects. Those objects, in our case, are poles of 10 cm diameter whose surfaces are good at reflecting light (e.g., the IR light emitted by lasers/LiDARs). Many of those poles are deployed around the area of
operation of our platform, and their positions were surveyed and registered in a
“navigation map”. We refer to those useful poles as “Landmarks”. Landmarks can be
used for helping map-based localization, e.g. by basic approaches such as triangulation
or trilateration or multilateration, and by other more advanced and powerful approaches.
The characteristics of the poles we used as landmarks make the pole detection easier.
When we process a LiDAR scan, each scanned pole appears as a small cluster of points, of which at least one produces an intense reflection. We exploit that pattern to infer whether a cluster of points has a good chance of being a landmark. In that case we say that that cluster is an OOI (Object of Interest). So, we usually pay attention to OOIs, and we apply additional processing on them to infer with high certainty whether they actually are landmarks, and also to infer their "identities" (that job is done in Part C
of the project).
For us, an OOI is any cluster (of consecutive points) that has the following
characteristics:
1) It has an approximate size that is smaller than 20cm.
2) It has at least 1 highly reflective point.
If both conditions are satisfied, we say that that cluster is an OOI.
Data Association (DA). How do we perform it in Part C?
In our context of application, DA means the process of obtaining matches between elements of two sets: for instance, for each element of set A, which element of set B corresponds to it. In Project 1 we apply DA to treat the following case: "I can see several OOIs now; I know that many of them may be Landmarks. For each of the OOIs that I can see now, which landmark in the navigation map is it?"
For solving the DA, we will apply the following approach. Given a set of OOIs, whose
positions in the LiDAR CF we know, we express those positions in the GCF. We base
that transformation on the current estimates of the platform pose; those estimates of the
pose are maintained by the Module implemented for solving part A (or in part E) of the
project. When we express those OOIs in GCF, some of them will usually appear “close
enough” to Landmarks. If an OOI appears to be close to a landmark (e.g. having a
distance < 1m), then we will say that that OOI is associated to that landmark.
The logic of this DA procedure is that landmarks are sparsely distributed in the area of operation of the platform, and thus there is enough separation between any possible pair of landmarks to safely apply the proposed DA rule.
However, we need to consider some additional rules/assumptions.
1) Not every OOI is associated to a landmark (yes, we may detect clusters that satisfy the OOI specifications, but, still, they may correspond to objects that are not included in the navigation map).
2) Not all the landmarks are visible to the LiDAR (that depends on the pose of the
LiDAR, occlusions, sensor limited range, etc.)
The relation between our DA process and the pose estimation process is a “chicken/egg
case”. DA does depend on the estimates of the platform’s pose, and the estimation
process of the pose requires the DA output. However, in the way we use and perform those processes, the overall process is successful: we always maintain pose estimates that are accurate enough to guarantee a successful DA, and sporadic but frequent enough successful DAs allow generating proper corrections in the pose estimation process.
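A minimal sketch of the nearest-landmark rule described above is given below; it assumes the OOIs are already expressed in GCF as a 2xN matrix, the landmarks as a 2xM matrix, and a tolerance of 1 metre (the function name AssociateOOIs is a placeholder).

% Associate each OOI (GCF) with its nearest landmark, if closer than the tolerance.
function pairs = AssociateOOIs(ooisGCF, landmarks, tolerance)
    pairs = zeros(0, 2);                                    % rows: [OOI index, landmark index]
    for i = 1:size(ooisGCF, 2)
        d = sqrt(sum((landmarks - ooisGCF(:, i)).^2, 1));   % distances to all landmarks
        [dmin, j] = min(d);
        if dmin < tolerance                                 % e.g., tolerance = 1 (metre)
            pairs(end+1, :) = [i, j];                       %#ok<AGROW>
        end
    end
end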
Measuring processing time
If we want to estimate the processing time of some section of code, we may use
functions tic() and toc(). Those are not accurate for very fast sections of code but are
usually accurate enough for measuring processing times larger than a few milliseconds. We will consider the times measured by this approach in an averaged fashion; we will not pay attention to certain spikes that may occasionally occur due to operating-system multitasking matters, or to MATLAB virtual-machine matters.
A serious way to get details about processing times of different parts of our MATLAB
programs is by using the profiler tool (profile). However, we simply ask you to use tic()
and toc() each time you process a LiDAR event.
You can easily measure the processing time of just the processing component of the event, and also the processing time of the full event, i.e., including the plotting part. In any case, always exclude the call to the pause function.
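For instance, the measurement at a LiDAR event may look like the following sketch (variable names are placeholders):

t0 = tic;
% ... process the LiDAR scan here (clustering, OOI detection, DA, etc.) ...
tProcessing = toc(t0);       % data-processing time only
% ... refresh the plots here ...
tFullEvent  = toc(t0);       % processing + visualization time
% any call to pause(...) goes after the measurement and is never included in it.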
LiDARs installation on the platform
Each LiDAR sensor is installed at a given position and with a given orientation in the platform's coordinate frame.
Those parameters (position and orientation) are provided in the dataset and can be
read using one of the API functions. How to use it can be inferred from the example program, in which we use that function as follows: "UsefulInfo = MyApi.b.GetInfo();", and then we use the returned variable in subsequent parts of the program.
% details about position and orientation of the LiDARs, in the platform.
Lidar1CFG = UsefulInfo.LidarsCfg.Lidar1;   % installation info about Lidar#1
Lidar2CFG = UsefulInfo.LidarsCfg.Lidar2;   % installation info about Lidar#2
Additional explanation can be found in the videos offered in Moodle, in which we
actually use these parameters for certain necessary transformations.
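For reference, a sketch of that transformation is shown below. The field names used for the LiDAR configuration (.Lx, .Ly, .Alpha) are hypothetical placeholders; use the actual fields provided by Lidar1CFG / Lidar2CFG in the API.

% Express 2xN scan points, given in a LiDAR CF, in the GCF.
% pose = [x; y; heading] of the platform in the GCF.
function pGCF = LidarToGCF(points, lidarCfg, pose)
    pPlatform = Rot(lidarCfg.Alpha) * points + [lidarCfg.Lx; lidarCfg.Ly];   % LiDAR CF -> platform CF
    pGCF      = Rot(pose(3)) * pPlatform + [pose(1); pose(2)];               % platform CF -> GCF
end
function R = Rot(a)
    R = [cos(a) -sin(a); sin(a) cos(a)];
end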
API helper function for detecting salient clusters (optional use).
The API offers a function for detecting salient clusters, in LiDAR scans.
Any small cluster of points that is salient is detected. A salient cluster is a set of
consecutive points that constitute an apparently continuous segment that seems closer
to the LIDAR than its adjacent surfaces (which are usually background context). Poles
can be inferred in this way. You may use this API function to obtain a list of small salient clusters and, from that list, by considering only those that are not opaque, obtain a list of OOIs.
Usually when you initialize your program, you can get a function handle to that API
function in this way:
% MyApi = APImtrn4010_v01();
% ...
GetSalientClusters = MyApi.b.FindSmallSegmentsFS;
% and then use it in this way:
[iiab, properties, nc] = GetSalientClusters(Ranges, dL, dW);
In which Ranges is a 301x1 array of ranges. dL is the minimum variation of range to assume a transition between adjacent segments of points. dL is specified in the same units used by Ranges; in our case dL should be around 0.8m, so that if Ranges is in metres, dL should be 0.8.
dW specifies the maximum allowed width of a cluster (in the same units used in Ranges). We may propose 0.2 or 0.3 for our goal of detecting clusters of sizes <20cm or <30cm.
Although the function calculation is generic, it has been internally set to assume that the
scan angular resolution is 0.5 degrees, and that Ranges is a 301x1 array.
The function's relevant outputs are:
nc: number of detected compatible clusters.
iiab: indexes indicating the first and last pixel of each cluster. iiab is a matrix of size nc x 2 (nc rows, 2 columns).
For instance, if nc=10, it means there are 10 detected clusters. So, for k=1 to nc, cluster number k starts at index iiab(k,1) and ends at index iiab(k,2); those indexes indicate the starting and ending points of each cluster. Based on that, you can infer the subset of points that constitute each cluster, allowing you to infer whether a cluster is opaque or not, the approximate centre of the cluster, etc.
The output variable properties contains the estimated centres of the detected clusters, in polar representation. It also contains their approximate widths.
So, for cluster number k:
properties(k,1): the cluster width (expressed in the same units used by Ranges).
properties(k,2): Estimated range to the cluster’s centre.
properties(k,3): Estimated angular position of the cluster’s centre.
Alternatively to using the information provided in properties, you may estimate those properties yourself by using the information provided in the output variable iiab.
The lecturer will upload a video, explaining how to use this API function.
Finally, we remark that this function is a helper function that you may use, if you want.
There is no penalty for using it. However, you are free to implement your own approach, or to use some valid third-party solution for the same purpose.
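As an illustration of how the helper function's outputs could be combined with the intensity information to obtain OOIs, consider the sketch below; it assumes decoded ranges/intensities (301x1), dL around 0.8 and dW around 0.2 as suggested above, and that the angular position in properties(:,3) is in radians (verify this against the API/video). The function name DetectOOIs is a placeholder.

% Keep only salient clusters that contain at least one highly reflective point.
function ooiCentres = DetectOOIs(ranges, intensities, GetSalientClusters, dL, dW)
    [iiab, properties, nc] = GetSalientClusters(ranges, dL, dW);
    ooiCentres = zeros(2, 0);                            % [x; y] centres, in the LiDAR CF
    for k = 1:nc
        idx = iiab(k,1):iiab(k,2);                       % points belonging to cluster k
        if any(intensities(idx) > 0)                     % "not opaque" => OOI
            r = properties(k,2);  a = properties(k,3);   % centre, polar form (angle assumed in radians)
            ooiCentres(:, end+1) = [r*cos(a); r*sin(a)]; %#ok<AGROW>
        end
    end
end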